Artificial consciousness: Difference between revisions

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license. Give it a read and then ask your questions in the chat. We can research this topic together.
Earlier revision as of 19:12, 30 November 2004, by 80.3.32.9 (talk): "Information processing and consciousness".
Later revision as of 22:30, 30 November 2004, by Tkorrovi (talk | contribs), edit summary: "28 Nov 2004 by 80.3.32.10: 998 words removed, 30% of the 3395 there was; too much content removed. Reverted to the version by Sam Hocever from 17 Oct 2004, edit from that, solve neutrality dispute first."
Content deleted (text appearing only in the earlier revision):
The idea that people may create devices that are conscious is known as '''artificial consciousness''' (AC). This is an ancient idea, perhaps dating back to an ancient Greek myth in which conscious people were supposedly manufactured from clay, pottery being an advanced technology in those days. In modern science fiction, artificial people or conscious beings are described as being manufactured from electronic components.


The idea of artificial consciousness is an interesting philosophical problem in the twenty-first century because, with increased understanding of genetics, neuroscience and information processing, it may soon be possible to create an entity that is conscious.


The simplest way to create such a being would be to manufacture a genome that had the genes necessary for a human brain and to inject this into a suitable host germ cell. Such a creature, when implanted and born from a suitable womb, would very possibly be conscious and artificial. But what properties of this organism would be responsible for its consciousness? Could such a being be made from non-biological components? Can the techniques used in the design of computers be adapted to create a conscious entity? Would it ever be ethical to do such a thing?


== The nature of consciousness ==


Consciousness is described at length in the consciousness article on Wikipedia. According to theories of direct perception, we perceive things in the world directly and our brains perform processing. On the other hand, according to theories of indirect perception, our brains contain data about the world and what we perceive is some sort of mental model that appears to overlay physical things as a result of projective geometry (such as the point observer in Cartesian dualism). Which of these general approaches to consciousness is correct has not been resolved and is the subject of fierce debate.


The theory of direct perception is problematic because it would seem to require some new physical theory that allows conscious experience to be located directly in the world outside the brain. On the other hand, if we perceive things indirectly, via a model of the world in our brains, then some new physical phenomenon, other than the endless further flow of data, would be needed to explain how the model becomes experience.


If we perceive things directly, self-awareness is difficult to explain, because one of the principal reasons for proposing direct perception is to avoid a regress in which internal processing becomes an infinite loop or recursion. The belief in direct perception also demands that we cannot 'really' be aware of dreams, imagination, mental images or any inner life, because these would involve recursion. As mentioned above, proponents of indirect perception suggest some phenomenon, either physical or dualist, to prevent the recursion.


If we perceive things indirectly, then self-awareness would result from the extension of experience in time described by Descartes and others. Unfortunately this extension in time may not be consistent with our current understanding of physics.


== Information processing and consciousness ==


Information processing consists of encoding a state, such as the geometry of an image, on a carrier such as a stream of electrons, and then submitting this encoded state to a series of transformations specified by a set of instructions called a program. In principle the carrier could be anything, even steel balls or onions, and the machine that implements the instructions need not be electronic; it could be mechanical or fluidic.
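
To make this carrier-independence concrete, here is a minimal illustrative sketch (not from the original article; the transformation names are invented): the same "program", a fixed sequence of transformations, is applied to a state encoded on two different carriers.

<syntaxhighlight lang="python">
# Minimal sketch: one "program" (a sequence of transformations) applied to a state
# encoded on two different carriers. The carrier is arbitrary; only the
# transformations specified by the program matter.

def shift(state):
    # Transformation 1: advance every element of the encoded state by one step.
    return [x + 1 for x in state]

def mirror(state):
    # Transformation 2: reverse the spatial order of the encoded state.
    return list(reversed(state))

PROGRAM = [shift, mirror]          # the set of instructions

def run(program, state):
    for instruction in program:
        state = instruction(state)
    return state

# Carrier A: the state encoded as integers (e.g. pixel positions).
print(run(PROGRAM, [1, 2, 3]))                           # -> [4, 3, 2]

# Carrier B: the same abstract state encoded as character codes in a string.
encoded = [ord(c) for c in "abc"]
print("".join(chr(x) for x in run(PROGRAM, encoded)))    # -> "dcb"
</syntaxhighlight>

The point, as the article goes on to argue, is that the electrons, steel balls or characters carry no intrinsic meaning of their own; only the encoded structure and the transformations applied to it matter.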


Digital computers implement information processing. From the earliest days of digital computers people have suggested that these devices may one day be conscious. One of the earliest workers to consider this idea seriously was Alan Turing. The Wikipedia article on artificial intelligence (AI) considers this problem in depth.


If technologists were limited to the use of the principles of digital computing when creating a conscious entity, they would face the problems associated with the philosophy of artificial intelligence. The most serious problem is John Searle's Chinese room argument, in which it is argued that the contents of an information processor have no intrinsic meaning: at any moment they are just a set of electrons or steel balls, etc.


Searle's objection does not convince those who believe in direct perception because they would maintain that 'meaning' is only to be found in the objects of perception, which they believe are the world itself.


It is interesting that the misnomer ''digital sentience'' is sometimes used in the context of artificial intelligence research. Sentience means the ability to feel or perceive in the absence of thoughts, especially inner speech. It draws attention to the way that conscious experience is a state: it consists of things laid out in time and space, and is more than the simple processes that occur in digital computers.


== Artificial consciousness beyond information processing ==


The debate about whether a machine could be conscious under any circumstances is usually described as the conflict between dualism and physicalism. Dualists believe that there is something non-physical about consciousness, whilst physicalists hold that all things are physical.


Those who believe that consciousness is physical are not limited to those who hold that consciousness is a property of encoded information on carrier signals. Several indirect realist philosophers and scientists have proposed that, although information processing might deliver the content of consciousness, the state that is consciousness is due to some other physical phenomenon. The same opinion has been held by eminent neurologists, and scientists such as Karl Pribram and Henry Stapp, amongst many others, have also proposed that consciousness involves physical phenomena that are more subtle than simple information processing. As was mentioned above, neither the ideas that involve direct perception nor those that involve models of the world in the brain seem to be compatible with current physical theory, and no amount of information processing is likely to resolve this problem. It seems that new physical theory may be required, and the possibility of dualism is not, as yet, ruled out.


==Testing for artificial consciousness==


Unless artificial consciousness can be proven formally, judgments of the success of any implementation will depend on observation.


By "less Genuine" we mean not as real as "Genuine" but more real than "Not-genuine". It is alternative view to "Genuine AC", by that view AC is less genuine only because of the requirement that AC study must be as objective as the ] demands, but by ] consciousness includes subjective experience that cannot be objectively observed. It does not intend to restrict AC in any other way.
The Turing test is a proposal for identifying machine intelligence as determined by a machine's ability to interact with a person. In the Turing test one has to guess whether the entity one is interacting with is a machine or a human. An artificially conscious entity could only pass an equivalent test when it had itself passed beyond the imaginations of observers and entered into a meaningful relationship with them, and perhaps with fellow instances of itself.

As mentioned above, the Chinese room argument attempts to debunk the validity of the Turing test by showing that a machine can pass the test and yet not be conscious.




Indeed, for those who argue for indirect perception, no test of behaviour can prove or disprove the existence of consciousness, because a conscious entity can have dreams and other features of an inner life. This point is made forcibly by those who stress the subjective nature of conscious experience, such as Thomas Nagel, who, in his essay ''What is it like to be a bat?'', argues that subjective experience cannot be reduced because it cannot be objectively observed, yet is not in contradiction with physicalism.


Although objective criteria are being proposed as prerequisites for testing the consciousness of a machine, the failure of any particular test would not disprove consciousness. Ultimately it will only be possible to assess whether a machine is conscious when a universally accepted understanding of consciousness is available.







== The ethics of artificial consciousness ==


In the absence of a true physical understanding of consciousness, researchers do not even know why they would want to construct a machine that is conscious. If it were certain that a particular machine was conscious, it would probably need to be given rights under law and could not be used as a slave.





Revision as of 22:30, 30 November 2004

The neutrality of this article is disputed. Relevant discussion may be found on the talk page. Please do not remove this message until conditions to do so are met.

Artificial consciousness (AC) encompasses digital sentience and simulated consciousness. The term describes artificial systems (e.g., robots) that simulate some degree of consciousness or sentience.

Simulated consciousness may not always be real consciousness, but one school of thought (see below) holds that some AC might be genuinely conscious; therefore the terms artificial consciousness and simulated consciousness are not equivalent. Digital sentience assumes that the artificial consciousness is exhibited by a computer (or a system with a computer as its "brain"): the possibility of a man-made yet biological system being conscious demonstrates that artificial consciousness is not equivalent to digital sentience.

Description

In computer science, the term digital sentience is used to describe the concept that digital computers could someday be capable of independent thought. Digital sentience, if it ever comes to exist, is likely to be a form of artificial intelligence. A generally accepted criterion for sentience is self-awareness, and this is also one of the definitions of consciousness. To support the concept of self-awareness, a definition of conscious can be cited: "having an awareness of one's environment and one's own existence, sensations, and thoughts" (dictionary.com).

In more general terms, an AC system should be theoretically capable of achieving various (or, by a stricter view, all) verifiable, known, objective, and observable aspects of consciousness. Another definition of the word conscious is: "Possessing knowledge, whether by internal, conscious experience or by external observation; cognizant; aware; sensible" (public domain 1913 Webster's Dictionary).

Aspects of AC

There are various aspects and/or abilities that are generally considered necessary for an AC system, or that an AC system should be able to learn; these are useful as criteria for determining whether a given machine is artificially conscious. Those listed below are only the most frequently cited; there are many others that are not covered.

The ability to predict (or anticipate) foreseeable events is considered a highly desirable attribute of AC by Igor Aleksander: he writes in Artificial Neuroconsciousness: An Update: "Prediction is one of the key functions of consciousness. An organism that cannot predict would have a seriously hampered consciousness." The multiple drafts principle proposed by Daniel Dennett in Consciousness Explained may be useful for prediction: it involves the evaluation and selection of the most appropriate "draft" to fit the current environment. The ability to predict may be interpreted as the ability to predict external events in any environment in which a capable human could predict them, but this is not widely accepted.
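
As a toy illustration of prediction as a criterion, and of choosing among candidate "drafts", the sketch below learns which event tends to follow which from an observed sequence and then selects the most plausible candidate. It is an assumed minimal example, not Aleksander's or Dennett's actual model; the class and method names are invented.

<syntaxhighlight lang="python">
from collections import defaultdict, Counter

class FrequencyPredictor:
    """Toy predictor: learns which event tends to follow which, then, when asked,
    evaluates candidate "drafts" (possible next events) and selects the best fit."""

    def __init__(self):
        self.counts = defaultdict(Counter)   # counts[current][next] = occurrences

    def observe(self, sequence):
        for current, nxt in zip(sequence, sequence[1:]):
            self.counts[current][nxt] += 1

    def predict(self, current, drafts):
        # Rank each candidate draft by how often it followed `current` before.
        return max(drafts, key=lambda d: self.counts[current][d])

predictor = FrequencyPredictor()
predictor.observe(["dark", "rain", "dark", "rain", "dark", "dry"])
print(predictor.predict("dark", drafts=["rain", "dry"]))   # -> "rain"
</syntaxhighlight>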


Consciousness is sometimes defined as self-awareness. While self-awareness is very important, it may be subjective and is generally difficult to test.

Another test of AC, in the opinion of some, should include a demonstration that the machine can learn to filter out certain stimuli in its environment, to focus on certain stimuli, and to show attention toward its environment in general. The mechanisms that govern how human attention is driven are not yet fully understood by scientists. This absence of knowledge could be exploited by engineers of AC; since we don't understand attentiveness in humans, we do not have specific and known criteria to measure it in machines. Since unconsciousness in humans equates to total inattentiveness, an AC should have outputs that indicate where its attention is focused at any one time, at least during the aforementioned test.
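
A minimal sketch of what such a test might probe, assuming invented stimulus labels and hand-assigned salience scores: the system filters out low-salience stimuli and produces an explicit output reporting where its "attention" is focused, as the article suggests an AC test would require.

<syntaxhighlight lang="python">
def attend(stimuli, weights, threshold=0.5):
    """Toy attention mechanism: filter out low-salience stimuli and report the focus.
    `weights` maps a stimulus label to an assumed salience score in [0, 1]."""
    salient = {s: weights.get(s, 0.0) for s in stimuli if weights.get(s, 0.0) >= threshold}
    focus = max(salient, key=salient.get) if salient else None
    return focus, salient

stimuli = ["background hum", "flashing light", "spoken name"]
weights = {"background hum": 0.1, "flashing light": 0.6, "spoken name": 0.9}
focus, salient = attend(stimuli, weights)
print(focus)     # -> "spoken name": an explicit output indicating the focus of attention
print(salient)   # -> the stimuli that were not filtered out
</syntaxhighlight>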

Awareness could be another required aspect. However, again, there are some problems with the exact definition of awareness. To illustrate this point, the philosopher David Chalmers controversially argues that a thermostat could be considered conscious: it knows if it is too hot, too cold, or at the correct temperature. Neuroscanning experiments on monkeys suggest that a process, not a state or an object, activates neurons. For such a reaction a model of the process must be created from the information received through the senses; creating models in this way demands a lot of flexibility, and it is also useful for making predictions.
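
The thermostat example can be made concrete with a toy sketch (invented thresholds, not Chalmers' own formulation); its three-valued report is the entire extent of the "awareness" being attributed to it, which is what makes the claim controversial.

<syntaxhighlight lang="python">
# Toy version of the thermostat in Chalmers' example: it "knows" only whether it is
# too hot, too cold, or at the correct temperature.

def thermostat(reading, target, tolerance=0.5):
    if reading > target + tolerance:
        return "too hot"
    if reading < target - tolerance:
        return "too cold"
    return "correct temperature"

print(thermostat(23.1, target=21.0))   # -> "too hot"
print(thermostat(21.2, target=21.0))   # -> "correct temperature"
</syntaxhighlight>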


Personality is another characteristic that is generally considered vital to consciousness. In the area of behavioral psychology, there is a somewhat popular theory that personality is an illusion created by the brain in order to interact with other people. It is argued that without other people to interact with, humans (and possibly other animals) would have no need of personalities, and human personality would never have evolved. An artificially conscious machine may need to have a personality capable of expression such that human observers can interact with it in a meaningful way. However, this is often questioned by computer scientists; the Turing test, which measures a machine's personality, is no longer generally considered useful.

Anticipation is the final characteristic that could possibly be used to define artificial consciousness. An artificially conscious machine should be able to anticipate events correctly in order to be ready to respond to them when they occur. The implication here is that the machine needs real-time components, making it possible to demonstrate that it possesses artificial consciousness in the present and not just in the past. In order to do this, the machine being tested must operate coherently in an unpredictable environment, to simulate the real world.

Schools of thought

There are several commonly stated views regarding the plausibility and capability of AC, and the likelihood that AC will ever be real consciousness. Note that the terms Genuine and Not-genuine refer not to the capability of the artificial consciousness but to its reality (how close it is to real consciousness). Believers in Genuine AC think that AC can (one day) be real; believers in Not-genuine AC think it never can be. For example, some believers in Genuine AC say the thermostat is really conscious, but they do not claim the thermostat is capable of an appreciation of music. In an interview Chalmers called his statement that the thermostat is conscious "very speculative".

Objective less Genuine AC

By "less Genuine" we mean not as real as "Genuine" but more real than "Not-genuine". It is alternative view to "Genuine AC", by that view AC is less genuine only because of the requirement that AC study must be as objective as the scientific method demands, but by Thomas Nagel consciousness includes subjective experience that cannot be objectively observed. It does not intend to restrict AC in any other way.

An AC system must be theoretically capable of achieving all known, objectively observable abilities of consciousness possessed by a capable human, even if it does not need to have all of them at any particular moment. Therefore AC is objective and always remains artificial, and it is only as close to consciousness as our objective understanding of the subject allows. Because of the demand that it be capable of achieving all these abilities, AC may be considered a strong artificial intelligence, but this also depends on how strong AI is defined.

Not-genuine AC

Artificial consciousness will never be real consciousness, but merely an approximation of it; it only mimics something that only humans (and some other sentient beings) can truly experience or manifest. Currently this is the state of artificial intelligence, and holders of the Not-genuine AC hypothesis believe that this will always be the case. No computer has been able to pass the somewhat vague Turing test, which would be a first step towards an AI that contains a "personality"; this would perhaps be one path to a Genuine AC. On a stricter view, the subject matter of another field such as AI should not also be the subject of AC, so only studies that cannot be categorized anywhere else, such as artificial emotions, can be considered "Not-genuine AC".

Genuine AC

Proponents of this view believe that artificial consciousness is (or will be) real consciousness, albeit one that has not arisen naturally. The argument in favour of Genuine AC is essentially this: if artificial consciousness is not real consciousness because it is exhibited by a machine, then we must assume that the human is not a machine, albeit a biological one. If there is something about a human which is not a machine, then it must be the soul (religion) or a magic spark (metaphysics), and science is bypassed. Some have proposed that merging quantum mechanics with string theories or m-brane models of the universe would supply a third element or quantum link serving as a special home for the human soul or a special conscious connection. This does not block the development of artificial consciousness in a quantum computer but would support it as a natural development. On that account, ordinary computers simulating consciousness would not be really conscious, but quantum computers would be.

This functionalist view, that the human being is truly a real machine, prompts us to ask what type of machine the brain is. That the brain is a machine of the Turing type is assumed because no more powerful computing paradigm has been discovered and all that is known about the brain (admittedly not very much), in the mainstream view, does nothing to contradict the supposition.

If this supposition is correct then the Church-Turing thesis applies and the possibility of Genuine AC being implemented on another machine of the Turing type must be admitted.

It is argued that until contradictory evidence is discovered, Occam's Razor and the Copernican principle support the view that AC can be real consciousness and that the building of AC which is real consciousness is likely. The argument goes: the human being is nothing but a machine. The Church-Turing thesis implies that, without new physics, no computing machine is fundamentally more capable than another; by Occam's Razor, we should not posit new physics without good reason. The Copernican principle states that we should claim no special position for human beings without good reason. The only "good" reasons we have are those of arrogance: humans are supposedly too complicated or special (or some other similar term) for their brains to be built or copied artificially, or for an alternative artificial architecture to the brain to be truly capable of consciousness.

The Genuine AC view assumes that anything that cannot be modelled by AC must be in contradiction with physicalism. Thomas Nagel, however, in his essay What is it like to be a bat?, argues that subjective experience cannot be reduced because it cannot be objectively observed, yet subjective experience is not in contradiction with physicalism. In essence, Nagel is claiming that subjective experience is impossible for a machine and that Genuine AC is therefore similarly impossible.

Daniel Dennett and Douglas Hofstadter's rebuttal, in their book The Mind's I, is the one given in the first paragraph of this section: they say that subjective experience is Nagel's call to metaphysics, his "magic spark".

Human-like AC

As is to be expected, the Not-genuine and Genuine schools of thought differ on the question of whether artificial consciousness needs to be human-like or whether it could be of an entirely different nature. Proponents of Genuine AC generally hold the view that artificial consciousness does not need to be similar to human consciousness, a view that, according to some, is questionable and needs a supporting reference. Some who hold that artificial consciousness can never be really conscious (i.e., Not-genuine AC proponents) hold that AC, not being real, can only be human-like, because that is the only "true" model of consciousness that we will (most likely) ever have, and that this Not-genuine AC will be tested based on what we know about real consciousness.

Nihilistic view

It is impossible to test if anything is conscious. To ask a thermometer to appreciate music is like asking a human to think in five dimensions. It is unnecessary for humans to think in five dimensions, just as it is irrelevant for thermostats to understand music. Consciousness is just a word attributed to things that appear to make their own choices, and perhaps to things that are too complex for our minds to comprehend. Things seem to be conscious, but that is just because our morality tells us to believe it, or because of our feelings for other things. Consciousness is an illusion.

Alternative views

One alternative view states that it is possible for a human to deny his or her own existence and thereby, presumably, his or her own consciousness. That a machine might cogently discuss Descartes' argument "I think, therefore I am" would be some evidence in favour of the machine's consciousness. A conscious machine could even argue that because it is a machine, it cannot be conscious. Consciousness does not imply unfailing logical ability. If we look at the dictionary definition, we find that consciousness is self-awareness: a totality of thought and experience. The richness or completeness of consciousness, degrees of consciousness, and many other related topics are under discussion, and will be so for some time (possibly forever). That one entity's consciousness is less "advanced" than another's does not prevent each from considering its own consciousness rich and complete.

Today's computers are not generally considered conscious. A Unix (or derivative thereof) computer's response to the wc -w command, reporting the number of words in a text file, is not a particularly compelling manifestation of consciousness. However, the response to the top command, in which the computer reports in a real-time continuous fashion each of the tasks it is or is not busy on, how much spare CPU power is available, etc., is a particular if very limited manifestation of self-awareness (and therefore consciousness) by definition.
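
A hedged sketch of the same kind of self-report, using only Python standard-library calls on a Unix-like system: the process reports its own activity, loosely analogous to what top shows, which illustrates how limited such "self-awareness" is.

<syntaxhighlight lang="python">
import os
import resource
import time

def self_report():
    """Report this process's own activity, loosely analogous to what `top` shows.
    Self-description of resource use is, at best, a very limited form of "self-awareness"."""
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return {
        "pid": os.getpid(),
        "cpu_seconds_user": usage.ru_utime,
        "cpu_seconds_system": usage.ru_stime,
        "max_memory": usage.ru_maxrss,             # kilobytes on Linux, bytes on macOS
        "system_load_1min": os.getloadavg()[0],    # Unix-only, like the load figure in top
        "reported_at": time.strftime("%H:%M:%S"),
    }

print(self_report())
</syntaxhighlight>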

Testing AC

Unless artificial consciousness can be proven formally, judgments of the success of any implementation will depend on observation. Depending upon one's standpoint regarding what constitutes consciousness and the particular attributes of sentience and sapience necessary to demonstrate it, there are various views on what the test acceptance criteria for artificial consciousness should be. Obviously, if the criteria are too stringent, then counter-examples citing people who would themselves fail to meet such criteria can be posited to invalidate a particular test. On the other hand, the formulation of a minimum set of requirements, to act as design objectives for any implementation, can be helpful in order to conjecture that failure to meet all such requirements would lead observers to deem the implementation not conscious.

The Turing test, as mentioned before, is a proposal for identifying machine intelligence as determined by a machine's ability to interact with a person. The Chinese room argument attempts to debunk the validity of the Turing Test by showing that a machine can pass the test and yet not be sapient. In the Turing test one has to guess whether the entity one is interacting with is a machine or a human. An artificially conscious entity could only pass an equivalent test when it had itself passed beyond the imaginations of observers and entered into a meaningful relationship with them, and perhaps with fellow instances of itself.
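
The shape of the test can be sketched as follows; the two respondents here are deliberately crude stand-ins (invented stub functions), since the point is only to show the protocol: an interrogator questions a hidden respondent over a text channel and must guess whether it is the machine or the human.

<syntaxhighlight lang="python">
import random

# Minimal sketch of the shape of the Turing test protocol. A real test would put a
# human and a candidate machine behind the same text-only channel.

def human_respondent(question):
    return "I'd have to think about that; it depends on the context."

def machine_respondent(question):
    return "QUERY NOT UNDERSTOOD."            # a deliberately weak candidate

def run_trial(interrogate):
    label, respondent = random.choice([("human", human_respondent),
                                       ("machine", machine_respondent)])
    guess = interrogate(respondent)            # interrogator sees only the channel
    return guess == label

def naive_interrogator(respondent):
    answer = respondent("What do dreams feel like?")
    return "machine" if answer.isupper() else "human"

trials = [run_trial(naive_interrogator) for _ in range(1000)]
print(sum(trials) / len(trials))               # fraction of correct identifications
</syntaxhighlight>

A machine would be said to pass only if the interrogator's accuracy stayed near chance (0.5); with the weak stub above it is close to 1.0.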

A cat or dog would not be able to pass this test. It is highly likely that consciousness is not an exclusive property of humans, and it is likely that a machine could be conscious and yet not be able to pass the Turing test.

People would not deny that Christopher Nolan was conscious when he wrote Under The Eye of The Clock, and yet he is unable to move in a coordinated way or to communicate interactively except through eye movement, and then only with people who understand his particular gestures. The book he wrote has been compared with the works of William Butler Yeats and James Joyce, two other highly acclaimed Irish authors. This shows again that AC must be theoretically capable of achieving mental abilities, and that a test may fail if the system does not have enough resources, such as memory or sensors. If a test fails for that reason, it does not prove that an entity is not AC, but it does not prove the opposite either.

Since there is an enormous range of human behaviours, all of which are deemed to be conscious, it is difficult to lay down all the criteria by which to determine whether a machine manifests consciousness. Although objective criteria are being proposed as prerequisites for testing the consciousness of a machine, by the Christopher Nolan argument the failure of any particular test would not disprove consciousness, and it is likely that the final test would be conducted in a similar manner to a Turing test, and the result would hence be subjective.

Anticipation, attentiveness and personality are some of the drivers of the artificially intelligent machine, and there may be others upon which an artful simulation of consciousness will depend.

As artificial consciousness models are defined, designs implemented and models deployed and refined, it is argued that a time will come when artificial consciousness will become indistinguishable from consciousness.

Upon considering whether something qualifies to be called conscious, it may be that mere knowledge of its being a machine would disqualify it (from a human perspective) from being deemed conscious. However, one could then argue that it was conscious of its environment and self-aware of its actual nature as an artificial machine (analogous to our knowledge that we are humans). Artificial consciousness proponents have therefore loosened this constraint and allow that simulating or depicting a conscious machine, such as the robots in Star Wars, could count, not as examples of artificial consciousness, since their personalities are generated by actors, but as models. Hence, some consider that if someone were to produce an artificial C-3PO which behaved just like the real one, but without needing to be controlled and animated by a human, then it could qualify as being artificially conscious. This argument is questionable, however, as it does not say who loosened that constraint or where, and it is not known that anybody did.

Finally, an AC system may fail certain tests simply because the system is not developed to the necessary level or doesn't have enough resources; for example, a computer could fail an attentiveness test simply because it does not have enough memory.

Artificial consciousness as a field of study

Artificial consciousness includes research aiming to create and study artificially conscious systems in order to understand corresponding natural mechanisms.

The term "artificial consciousness" was used by several scientists including Professor Igor Aleksander, a faculty member at the Imperial College in London, England, who stated in his book Impossible Minds that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand language. Understanding a language does not mean understand the language you are using. Dogs may understand up to 200 words, but may not be able to demonstrate to everyone that they can do so.

Digital sentience has so far been an elusive goal, and a vague and poorly understood one at that. Since the 1950s, computer scientists, mathematicians, philosophers, and science fiction authors have debated the meaning, possibilities and the question of what would constitute digital sentience.

At this time, analog holographic sentience modeled after humans is more likely to be a successful approach.

Artificial consciousness in literature and movies

Fictional instances of artificial consciousness:

External links

See also