{{Short description|Field in cognitive science}}{{Artificial intelligence}}
'''Artificial consciousness''',<ref>{{cite journal|last1 = Thaler|first1 = S. L.|year = 1998|title = The emerging intelligence and its critical look at us|journal = Journal of Near-Death Studies|volume = 17|issue = 1| pages = 21–29| doi = 10.1023/A:1022990118714|s2cid = 49573301}}</ref> also known as '''machine consciousness''',{{sfn|Gamez|2008}}{{sfn|Reggia|2013}} '''synthetic consciousness''',<ref>{{Cite journal|last1=Smith|first1=David Harris|last2=Schillaci|first2=Guido|date=2021|title=Build a Robot With Artificial Consciousness? How to Begin? A Cross-Disciplinary Dialogue on the Design and Implementation of a Synthetic Model of Consciousness|journal=Frontiers in Psychology|volume=12|page=530560|doi=10.3389/fpsyg.2021.530560|pmid=33967869|pmc=8096926|issn=1664-1078|doi-access=free}}</ref> or '''digital consciousness''',<ref>{{Cite book|last=Elvidge|first=Jim|url=https://books.google.com/books?id=kIqttQEACAAJ|title=Digital Consciousness: A Transformative Vision|date=2018|publisher=John Hunt Publishing Limited|isbn=978-1-78535-760-2|language=en|access-date=2023-06-28|archive-date=2023-07-30|archive-url=https://web.archive.org/web/20230730001838/https://books.google.com/books?id=kIqttQEACAAJ|url-status=live}}</ref> is the [[consciousness]] hypothesized to be possible in [[artificial intelligence]].<ref>{{cite journal|last1=Chrisley|first1=Ron|title=Philosophical foundations of artificial consciousness|journal=Artificial Intelligence in Medicine|date=October 2008|volume=44|issue=2|pages=119–137|doi=10.1016/j.artmed.2008.07.011|pmid=18818062|url=https://www.sciencedirect.com/science/article/abs/pii/S0933365708001000}}</ref> It is also the corresponding field of study, which draws insights from [[philosophy of mind]], [[philosophy of artificial intelligence]], [[cognitive science]] and [[neuroscience]].
The same terminology can be used with the term "[[sentience]]" instead of "consciousness" when specifically designating phenomenal consciousness (the ability to feel [[qualia]]).<ref>{{Cite web |last= |first= |title=The Terminology of Artificial Sentience |url=http://www.sentienceinstitute.org/blog/artificial-sentience-terminology |url-status=live |archive-url=https://web.archive.org/web/20240925040618/https://www.sentienceinstitute.org/blog/artificial-sentience-terminology |archive-date=2024-09-25 |access-date=2023-08-19 |website=Sentience Institute |language=en}}</ref> Since sentience involves the ability to experience ethically positive or negative (i.e., ''valenced'') mental states, it may justify welfare concerns and legal protection, as with animals.<ref name=":2" />
Some scholars believe that consciousness is generated by the interoperation of various parts of the [[brain]]; these mechanisms are labeled the [[neural correlates of consciousness]] or NCC. Some further believe that constructing a system (e.g., a [[computer]] system) that can emulate this NCC interoperation would result in a system that is conscious.{{sfn|Graziano|2013}}
==Philosophical views== | |||
As there are many hypothesized types of consciousness, there are many potential implementations of artificial consciousness. In the philosophical literature, perhaps the most common taxonomy of consciousness is into "access" and "phenomenal" variants. Access consciousness concerns those aspects of experience that can be apprehended, while phenomenal consciousness concerns those aspects of experience that seemingly cannot be apprehended, instead being characterized qualitatively in terms of "raw feels", "what it is like" or qualia.<ref>{{Cite journal|last=Block|first=Ned|date=2010|title=On a confusion about a function of consciousness|url=https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/abs/on-a-confusion-about-a-function-of-consciousness/061422BF0C50C5FF00927F9B6E879413|journal=Behavioral and Brain Sciences|language=en|volume=18|issue=2|pages=227–247|doi=10.1017/S0140525X00038188|s2cid=146168066|issn=1469-1825|access-date=2023-06-22|archive-date=2024-09-25|archive-url=https://web.archive.org/web/20240925040612/https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/abs/on-a-confusion-about-a-function-of-consciousness/061422BF0C50C5FF00927F9B6E879413|url-status=live}}</ref>
=== Plausibility debate === | |||
[[Type physicalism|Type-identity theorists]] and other skeptics hold the view that consciousness can be realized only in particular physical systems because consciousness has properties that necessarily depend on physical constitution.<ref>{{Cite journal|last=Block|first=Ned|date=1978|title=Troubles for Functionalism|journal=Minnesota Studies in the Philosophy of Science|pages=261–325}}</ref><ref>{{Cite book|last=Bickle|first=John|url=http://link.springer.com/10.1007/978-94-010-0237-0|title=Philosophy and Neuroscience|date=2003|publisher=Springer Netherlands|isbn=978-1-4020-1302-7|location=Dordrecht|language=en|doi=10.1007/978-94-010-0237-0|access-date=2023-06-24|archive-date=2024-09-25|archive-url=https://web.archive.org/web/20240925040620/https://link.springer.com/book/10.1007/978-94-010-0237-0|url-status=live}}</ref><ref>{{cite journal|last1 = Schlagel|first1 = R. H.|year = 1999|title = Why not artificial consciousness or thought?|journal = Minds and Machines|volume = 9|issue = 1| pages = 3–28|doi=10.1023/a:1008374714117| s2cid = 28845966}}</ref><ref>{{cite journal|last1 = Searle|first1 = J. R.|year = 1980|title = Minds, brains, and programs|url = http://cogprints.org/7150/1/10.1.1.83.5248.pdf|journal = Behavioral and Brain Sciences|volume = 3|issue = 3|pages = 417–457|doi = 10.1017/s0140525x00005756|s2cid = 55303721|access-date = 2019-01-28|archive-date = 2019-03-17|archive-url = https://web.archive.org/web/20190317230215/http://cogprints.org/7150/1/10.1.1.83.5248.pdf|url-status = live}}</ref> In his 2001 article "Artificial Consciousness: Utopia or Real Possibility," [[Giorgio Buttazzo]] says that a common objection to artificial consciousness is that, "Working in a fully automated mode, they cannot exhibit creativity, unreprogrammation (which means can 'no longer be reprogrammed', from rethinking), emotions, or [[free will]]. A computer, like a washing machine, is a slave operated by its components."<ref>{{Cite journal|last=Buttazzo|first=G.|date=2001|title=Artificial consciousness: Utopia or real possibility?|url=https://ieeexplore.ieee.org/document/933500|journal=Computer|volume=34|issue=7|pages=24–30|doi=10.1109/2.933500|access-date=2024-07-31|archive-date=2024-09-25|archive-url=https://web.archive.org/web/20240925040556/https://ieeexplore.ieee.org/document/933500/|url-status=live}}</ref>
For other theorists (e.g., [[Functionalism (philosophy of mind)|functionalists]]), who define mental states in terms of causal roles, any system that can instantiate the same pattern of causal roles, regardless of physical constitution, will instantiate the same mental states, including consciousness.<ref>{{Cite book|last=Putnam|first=Hilary|title=The nature of mental states in Capitan and Merrill (eds.) Art, Mind and Religion|publisher=University of Pittsburgh Press|year=1967}}</ref>
==== Thought experiments ==== | |||
[[David Chalmers]] proposed two [[thought experiment]]s intending to demonstrate that "functionally isomorphic" systems (those with the same "fine-grained functional organization", i.e., the same information processing) will have qualitatively identical conscious experiences, regardless of whether they are based on biological neurons or digital hardware.<ref name=":6">{{Cite journal |last=Chalmers |first=David |date=1995 |title=Absent Qualia, Fading Qualia, Dancing Qualia |url=https://consc.net/papers/qualia.html |journal=Conscious Experience}}</ref><ref>{{Cite journal |last=David J. Chalmers |date=2011 |title=A Computational Foundation for the Study of Cognition |url=https://www.ida.liu.se/divisions/hcs/seminars/cogsciseminars/Papers/Chalmers_Computational_foundations.pdf |url-status=live |journal=Journal of Cognitive Science |volume=12 |issue=4 |pages=325–359 |doi=10.17791/JCS.2011.12.4.325 |s2cid=248401010 |archive-url=https://web.archive.org/web/20231123085948/https://www.ida.liu.se/divisions/hcs/seminars/cogsciseminars/Papers/Chalmers_Computational_foundations.pdf |archive-date=2023-11-23 |access-date=2023-06-24}}</ref>
The "fading qualia" is a '']'' thought experiment. It involves replacing, one by one, the neurons of a brain with a functionally identical component, for example based on a ]. Since the original neurons and their silicon counterparts are functionally identical, the brain’s information processing should remain unchanged, and the subject would not notice any difference. However, if qualia (such as the subjective experience of bright red) were to fade or disappear, the subject would likely notice this change, which causes a contradiction. Chalmers concludes that the fading qualia hypothesis is impossible in practice, and that the resulting robotic brain, once every neurons are replaced, would remain just as sentient as the original biological brain.<ref name=":6" /><ref name=":7">{{Cite web |date=2023-09-30 |title=An Introduction to the Problems of AI Consciousness |url=https://thegradient.pub/an-introduction-to-the-problems-of-ai-consciousness/ |access-date=2024-10-05 |website=The Gradient |language=en}}</ref> | |||
Similarly, the "dancing qualia" thought experiment is another ''reductio ad absurdum'' argument. It supposes that two functionally isomorphic systems could have different perceptions (for instance, seeing the same object in different colors, like red and blue). It involves a switch that alternates between a chunk of brain that causes the perception of red, and a functionally isomorphic silicon chip, that causes the perception of blue. Since both perform the same function within the brain, the subject would not notice any change during the switch. Chalmers argues that this would be highly implausible if the qualia were truly switching between red and blue, hence the contradiction. Therefore, he concludes that the equivalent digital system would not only experience qualia, but it would perceive the same qualia as the biological system (e.g., seeing the same color).<ref name=":6" /><ref name=":7" /> | |||
Critics{{who|date=May 2023}} of artificial sentience object that Chalmers' proposal begs the question in assuming that all mental properties and external connections are already sufficiently captured by abstract causal organization. | |||
==== Controversies ==== | |||
In 2022, Google engineer Blake Lemoine made a viral claim that Google's [[LaMDA]] chatbot was sentient. Lemoine supplied as evidence the chatbot's humanlike answers to many of his questions; however, the chatbot's behavior was judged by the scientific community as likely a consequence of mimicry, rather than machine sentience. Lemoine's claim was widely derided for being ridiculous.<ref>{{cite news|date=14 August 2022|title='I am, in fact, a person': can artificial intelligence ever be sentient?|language=en|work=the Guardian|url=https://www.theguardian.com/technology/2022/aug/14/can-artificial-intelligence-ever-be-sentient-googles-new-ai-program-is-raising-questions|access-date=5 January 2023|archive-date=25 September 2024|archive-url=https://web.archive.org/web/20240925040732/https://www.theguardian.com/technology/2022/aug/14/can-artificial-intelligence-ever-be-sentient-googles-new-ai-program-is-raising-questions|url-status=live}}</ref> However, while philosopher [[Nick Bostrom]] states that LaMDA is unlikely to be conscious, he additionally poses the question of "what grounds would a person have for being sure about it?" One would have to have access to unpublished information about LaMDA's architecture, and also would have to understand how consciousness works, and then figure out how to map the philosophy onto the machine: "(In the absence of these steps), it seems like one should be maybe a little bit uncertain.{{nbsp}}... there could well be other systems now, or in the relatively near future, that would start to satisfy the criteria."<ref>{{cite news|last1=Leith|first1=Sam|date=7 July 2022|title=Nick Bostrom: How can we be certain a machine isn't conscious?|work=The Spectator|url=https://www.spectator.co.uk/article/nick-bostrom-how-can-we-be-certain-a-machine-isnt-conscious/|access-date=5 January 2023|archive-date=5 January 2023|archive-url=https://web.archive.org/web/20230105074430/https://www.spectator.co.uk/article/nick-bostrom-how-can-we-be-certain-a-machine-isnt-conscious/|url-status=live}}</ref>
=== Testing ===
Qualia, or phenomenological consciousness, is an inherently first-person phenomenon. Because of that, and the lack of an empirical definition of sentience, directly measuring it may be impossible. Although systems may display numerous behaviors correlated with sentience, determining whether a system is sentient is known as the [[problem of other minds]]. In the case of AI, there is the additional difficulty that the AI may be trained to act like a human, or incentivized to appear sentient, which makes behavioral markers of sentience less reliable.<ref>{{Cite news |last=Véliz |first=Carissa |date=2016-04-14 |title=The Challenge of Determining Whether an A.I. Is Sentient |url=https://slate.com/technology/2016/04/the-challenge-of-determining-whether-an-a-i-is-sentient.html |access-date=2024-10-05 |work=Slate |language=en-US |issn=1091-2339}}</ref><ref>{{Cite book |last=Birch |first=Jonathan |url=https://academic.oup.com/book/57949/chapter/475705460 |title=The Edge of Sentience |date=July 2024 |publisher=Oxford University Press |chapter=Large Language Models and the Gaming Problem}}</ref> Additionally, some chatbots have been trained to say they are not conscious.<ref>{{Cite news |last1=Agüera y Arcas |first1=Blaise |last2=Norvig |first2=Peter |date=October 10, 2023 |title=Artificial General Intelligence Is Already Here |url=https://www.noemamag.com/artificial-general-intelligence-is-already-here/ |work=Noéma}}</ref>
A well-known method for testing machine intelligence is the [[Turing test]], which assesses the ability to have a human-like conversation. But passing the Turing test does not indicate that an AI system is sentient, as the AI may simply mimic human behavior without having the associated feelings.<ref>{{Cite web |last1=Kirk-Giannini |first1=Cameron Domenico |last2=Goldstein |first2=Simon |date=2023-10-16 |title=AI is closer than ever to passing the Turing test for 'intelligence'. What happens when it does? |url=https://theconversation.com/ai-is-closer-than-ever-to-passing-the-turing-test-for-intelligence-what-happens-when-it-does-214721 |access-date=2024-08-18 |website=The Conversation |language=en-US |archive-date=2024-09-25 |archive-url=https://web.archive.org/web/20240925040612/https://theconversation.com/ai-is-closer-than-ever-to-passing-the-turing-test-for-intelligence-what-happens-when-it-does-214721 |url-status=live }}</ref>
In 2014, Victor Argonov suggested a non-Turing test for machine sentience based on a machine's ability to produce philosophical judgments.<ref>{{cite journal|author=Victor Argonov|year=2014|title=Experimental Methods for Unraveling the Mind-body Problem: The Phenomenal Judgment Approach|url=http://philpapers.org/rec/ARGMAA-2|journal=Journal of Mind and Behavior|volume=35|pages=51–70|access-date=2016-12-06|archive-date=2016-10-20|archive-url=https://web.archive.org/web/20161020014221/http://philpapers.org/rec/ARGMAA-2|url-status=live}}</ref> He argues that a deterministic machine must be regarded as conscious if it is able to produce judgments on all problematic properties of consciousness (such as qualia or binding) while having no innate (preloaded) philosophical knowledge on these issues, no philosophical discussions while learning, and no informational models of other creatures in its memory (such models may implicitly or explicitly contain knowledge about these creatures' consciousness). However, this test can only detect, but not refute, the existence of consciousness. A positive result proves that the machine is conscious, but a negative result proves nothing. For example, the absence of philosophical judgments may be caused by the machine's lack of intellect, not by an absence of consciousness.
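The asymmetry of Argonov's test, where a positive result counts as proof and a negative result proves nothing, can be summarized as a one-sided decision rule. The sketch below merely paraphrases that logic in Python; the predicate names are invented, and actually evaluating them (for instance, verifying the absence of preloaded philosophical knowledge) is the hard, unformalized part.

<syntaxhighlight lang="python">
from enum import Enum

class Verdict(Enum):
    CONSCIOUS = "machine is judged conscious"
    INCONCLUSIVE = "nothing is proven either way"

def argonov_verdict(produces_philosophical_judgments: bool,
                    preloaded_philosophy: bool,
                    philosophical_discussions_in_training: bool,
                    models_of_other_minds: bool) -> Verdict:
    """One-sided rule paraphrasing Argonov's 2014 proposal: the test
    can detect consciousness but can never refute it."""
    uncontaminated = not (preloaded_philosophy
                          or philosophical_discussions_in_training
                          or models_of_other_minds)
    if produces_philosophical_judgments and uncontaminated:
        return Verdict.CONSCIOUS      # a positive result is taken as proof
    return Verdict.INCONCLUSIVE       # a negative result proves nothing
</syntaxhighlight>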
=== Ethics === | |||
{{Main|Ethics of artificial intelligence|Machine ethics|Roboethics}} | |||
If it were suspected that a particular machine was conscious, its rights would be an [[ethics|ethical]] issue that would need to be assessed (e.g. what rights it would have under law).<ref>{{Cite news |date=April 10, 2023 |title=Should Robots With Artificial Intelligence Have Moral or Legal Rights? |url=https://www.wsj.com/articles/robots-ai-legal-rights-3c47ef40 |work=The Wall Street Journal}}</ref> For example, the status of a conscious computer that was owned and used as a tool or as the central computer of a larger machine is particularly ambiguous. Should [[law]]s be made for such a case? Consciousness would also require a legal definition in this particular case. Because artificial consciousness is still largely a theoretical subject, such ethics have not been discussed or developed to a great extent, though it has often been a theme in fiction.
Sentience is generally considered sufficient for moral consideration, but some philosophers consider that moral consideration could also stem from other notions of consciousness, or from capabilities unrelated to consciousness,<ref name=":4">{{Cite book |last=Bostrom |first=Nick |title=Deep utopia: life and meaning in a solved world |date=2024 |publisher=Ideapress Publishing |isbn=978-1-64687-164-3 |location=Washington, DC |page=82}}</ref><ref name=":5">{{Cite journal |last1=Sebo |first1=Jeff |last2=Long |first2=Robert |date=11 December 2023 |title=Moral Consideration for AI Systems by 2030 |url=https://jeffsebo.net/wp-content/uploads/2023/06/moral-consideration-for-ai-systems-by-2030-5.pdf |journal=AI and Ethics|doi=10.1007/s43681-023-00379-1 }}</ref> such as: "having a sophisticated conception of oneself as persisting through time; having agency and the ability to pursue long-term plans; being able to communicate and respond to normative reasons; having preferences and powers; standing in certain social relationships with other beings that have moral status; being able to make commitments and to enter into reciprocal arrangements; or having the potential to develop some of these attributes."<ref name=":4" /> | |||
Ethical concerns still apply (although to a lesser extent) when the consciousness is uncertain, as long as the probability is deemed non-negligible. The [[precautionary principle]] is also relevant if the moral cost of mistakenly attributing or denying moral consideration to AI differs significantly.<ref name=":5" /><ref name=":2" />
In 2021, German philosopher ] argued for a global moratorium on synthetic phenomenology until 2050. Metzinger asserts that humans have a duty of care towards any sentient AIs they create, and that proceeding too fast risks creating an "explosion of artificial suffering".<ref>{{cite journal|doi=10.1142/S270507852150003X|title=Artificial Suffering: An Argument for a Global Moratorium on Synthetic Phenomenology|year=2021|last1=Metzinger|first1=Thomas|journal=Journal of Artificial Intelligence and Consciousness|volume=08|pages=43–66|s2cid=233176465|doi-access=free}}</ref> David Chalmers also argued that creating conscious AI would "raise a new group of difficult ethical challenges, with the potential for new forms of injustice".<ref name=":3" /> | |||
Enforced amnesia has been proposed as a way to mitigate the risk of silent suffering in locked-in conscious AI and certain AI-adjacent biological systems like ].<ref name="aiamnesia">{{Cite journal|last1=Tkachenko|first1=Yegor|year=2024|title=Position: Enforced Amnesia as a Way to Mitigate the Potential Risk of Silent Suffering in the Conscious AI|journal=Proceedings of the 41st International Conference on Machine Learning|url=https://openreview.net/forum?id=nACGn4US1R|access-date=2024-06-11|publisher=PMLR|language=en|archive-date=2024-06-10|archive-url=https://web.archive.org/web/20240610171754/https://openreview.net/forum?id=nACGn4US1R|url-status=live}}</ref> | |||
== Aspects of consciousness == | |||
[[Bernard Baars]] and others argue there are various aspects of consciousness necessary for a machine to be artificially conscious.{{sfn|Baars|1995}} The functions of consciousness suggested by Baars are: definition and context setting, adaptation and learning, editing, flagging and debugging, recruiting and control, prioritizing and access-control, decision-making or executive function, analogy-forming function, metacognitive and self-monitoring function, and autoprogramming and self-maintenance function. [[Igor Aleksander]] suggested 12 principles for artificial consciousness:<ref>{{Cite book|last=Aleksander|first=Igor|title=From Natural to Artificial Neural Computation|chapter=Artificial neuroconsciousness an update|date=1995|editor-last=Mira|editor-first=José|editor2-last=Sandoval|editor2-first=Francisco|chapter-url=https://link.springer.com/chapter/10.1007/3-540-59497-3_224|series=Lecture Notes in Computer Science|volume=930|language=en|location=Berlin, Heidelberg|publisher=Springer|pages=566–583|doi=10.1007/3-540-59497-3_224|isbn=978-3-540-49288-7|access-date=2023-06-22|archive-date=2024-09-25|archive-url=https://web.archive.org/web/20240925040733/https://link.springer.com/chapter/10.1007/3-540-59497-3_224|url-status=live}}</ref> the brain is a state machine, inner neuron partitioning, conscious and unconscious states, perceptual learning and memory, prediction, the awareness of self, representation of meaning, learning utterances, learning language, will, instinct, and emotion. The aim of AC is to define whether and how these and other aspects of consciousness can be synthesized in an engineered artifact such as a digital computer. This list is not exhaustive; there are many others not covered.
=== Subjective experience === | |||
Some philosophers, such as [[David Chalmers]], use the term consciousness to refer exclusively to phenomenal consciousness, which is roughly equivalent to sentience. However, some authors use the word sentience to refer exclusively to ''valenced'' (ethically positive or negative) subjective experiences, like pleasure or suffering.<ref name=":3">{{Cite news |last=Chalmers |first=David J. |date=August 9, 2023 |title=Could a Large Language Model Be Conscious? |url=https://www.bostonreview.net/articles/could-a-large-language-model-be-conscious/ |work=Boston Review}}</ref> Explaining why and how subjective experience arises is known as the [[hard problem of consciousness]].<ref>{{Cite web |last=Seth |first=Anil |title=Consciousness |url=https://www.newscientist.com/definition/consciousness/ |access-date=2024-09-05 |website=New Scientist |language=en-US |archive-date=2024-09-14 |archive-url=https://web.archive.org/web/20240914003058/https://www.newscientist.com/definition/consciousness/ |url-status=live }}</ref> AI sentience would give rise to concerns of welfare and legal protection,<ref name=":2">{{Cite magazine |last=Kateman |first=Brian |date=2023-07-24 |title=AI Should Be Terrified of Humans |url=https://time.com/6296234/ai-should-be-terrified-of-humans/ |access-date=2024-09-05 |magazine=TIME |language=en |archive-date=2024-09-25 |archive-url=https://web.archive.org/web/20240925041601/https://time.com/6296234/ai-should-be-terrified-of-humans/ |url-status=live }}</ref> whereas other aspects of consciousness related to cognitive capabilities may be more relevant for AI rights.<ref>{{Cite web |last=Nosta |first=John |date=December 18, 2023 |title=Should Artificial Intelligence Have Rights? |url=https://www.psychologytoday.com/us/blog/the-digital-self/202312/should-artificial-intelligence-have-rights |access-date=2024-09-05 |website=Psychology Today |language=en-US |archive-date=2024-09-25 |archive-url=https://web.archive.org/web/20240925041049/https://www.psychologytoday.com/us/blog/the-digital-self/202312/should-artificial-intelligence-have-rights |url-status=live }}</ref>
=== Awareness === | |||
[[Awareness]] could be one required aspect, but there are many problems with the exact definition of ''awareness''. The results of the experiments of neuroscanning on monkeys suggest that a process, not only a state or object, activates neurons. Awareness includes creating and testing alternative models of each process based on the information received through the senses or imagined,{{clarify|date=June 2023}} and is also useful for making predictions. Such modeling needs a lot of flexibility. Creating such a model includes modeling the physical world, modeling one's own internal states and processes, and modeling other conscious entities.
There are at least three types of awareness:<ref>Joëlle Proust in ''Neural Correlates of Consciousness'', Thomas Metzinger, 2000, MIT, pages 307–324</ref> agency awareness, goal awareness, and sensorimotor awareness, which may also be conscious or not. For example, in agency awareness, you may be aware that you performed a certain action yesterday, but are not now conscious of it. In goal awareness, you may be aware that you must search for a lost object, but are not now conscious of it. In sensorimotor awareness, you may be aware that your hand is resting on an object, but are not now conscious of it. | |||
Because objects of awareness are often conscious, the distinction between awareness and consciousness is frequently blurred or they are used as synonyms.<ref>Christof Koch, ''The Quest for Consciousness'', 2004, page 2 footnote 2</ref> | |||
=== Memory === | |||
Conscious events interact with [[memory]] systems in learning, rehearsal, and retrieval.<ref>Tulving, E. 1985. Memory and consciousness. Canadian Psychology 26:1–12</ref>
The [[LIDA (cognitive architecture)|IDA model]]<ref>Franklin, Stan, et al. "The role of consciousness in memory." Brains, Minds and Media 1.1 (2005): 38.</ref> elucidates the role of consciousness in the updating of perceptual memory,<ref>Franklin, Stan. "Perceptual memory and learning: Recognizing, categorizing, and relating." Proc. Developmental Robotics AAAI Spring Symp. 2005.</ref> transient [[episodic memory]], and [[procedural memory]]. Transient episodic and declarative memories have distributed representations in IDA; there is evidence that this is also the case in the nervous system.<ref>Shastri, L. 2002. Episodic memory and cortico-hippocampal interactions. Trends in Cognitive Sciences</ref> In IDA, these two memories are implemented computationally using a modified version of [[Pentti Kanerva]]'s [[sparse distributed memory]] architecture.<ref>Kanerva, Pentti. Sparse distributed memory. MIT press, 1988.</ref>
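To give a flavor of the mechanism: in Kanerva's design, each item is written to, and read back from, every "hard location" whose fixed random address lies within a Hamming radius of the cue, so traces are distributed across many locations and recall tolerates noise. The following minimal sketch follows the textbook construction only; the parameters are arbitrary, and everything IDA layers on top is omitted.

<syntaxhighlight lang="python">
import numpy as np

class SparseDistributedMemory:
    """Minimal sketch of a Kanerva-style sparse distributed memory."""
    def __init__(self, n_locations=1000, dim=256, radius=111, seed=0):
        rng = np.random.default_rng(seed)
        # Fixed, random binary addresses of the hard locations.
        self.addresses = rng.integers(0, 2, size=(n_locations, dim))
        self.counters = np.zeros((n_locations, dim), dtype=int)
        self.radius = radius

    def _nearby(self, address):
        # All hard locations within the Hamming radius of the cue.
        return np.sum(self.addresses != address, axis=1) <= self.radius

    def write(self, address, data):
        # Increment counters for 1-bits, decrement for 0-bits.
        self.counters[self._nearby(address)] += 2 * np.asarray(data) - 1

    def read(self, address):
        # Majority vote across all activated locations.
        return (self.counters[self._nearby(address)].sum(axis=0) > 0).astype(int)
</syntaxhighlight>

Writing a pattern with itself as the address stores it autoassociatively; reading with a slightly corrupted copy of the pattern then tends to recover the original, because most of the same hard locations are activated by the noisy cue.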
=== Learning === | |||
Learning is also considered necessary for artificial consciousness. Per Bernard Baars, conscious experience is needed to represent and adapt to novel and significant events.{{sfn|Baars|1995}} Per [[Axel Cleeremans]] and Luis Jiménez, learning is defined as "a set of philogenetically{{sic}} advanced adaptation processes that critically depend on an evolved sensitivity to subjective experience so as to enable agents to afford flexible control over their actions in complex, unpredictable environments".<ref>{{Cite web|title=Implicit Learning and Consciousness: An Empirical, Philosophical and Computational Consensus in the Making|url=https://www.routledge.com/Implicit-Learning-and-Consciousness-An-Empirical-Philosophical-and-Computational/Cleeremans-French/p/book/9781138877412|access-date=2023-06-22|website=Routledge & CRC Press|language=en|archive-date=2023-06-22|archive-url=https://web.archive.org/web/20230622223246/https://www.routledge.com/Implicit-Learning-and-Consciousness-An-Empirical-Philosophical-and-Computational/Cleeremans-French/p/book/9781138877412|url-status=live}}</ref>
=== Anticipation === | |||
The ability to predict (or anticipate) foreseeable events is considered important for artificial intelligence by [[Igor Aleksander]].<ref name="Aleksander 1995">Aleksander 1995</ref> The emergentist [[multiple drafts model]] proposed by [[Daniel Dennett]] in ''[[Consciousness Explained]]'' may be useful for prediction: it involves the evaluation and selection of the most appropriate "draft" to fit the current environment. Anticipation includes prediction of consequences of one's own proposed actions and prediction of consequences of probable actions by other entities.
Relationships between real-world states are mirrored in the state structure of a conscious organism, enabling the organism to predict events.<ref name="Aleksander 1995" /> An artificially conscious machine should be able to anticipate events correctly, so as to be ready to respond to them when they occur, or to take preemptive action to avert anticipated events. The implication is that the machine needs flexible, real-time components that build spatial, dynamic, statistical, functional, and cause-effect models of the real world and of predicted worlds, making it possible to demonstrate that it possesses artificial consciousness in the present and future, not only in the past. To do this, a conscious machine should make coherent predictions and contingency plans, not only in worlds with fixed rules like a chessboard, but also in novel environments that may change, executing them only when appropriate in order to simulate and control the real world.
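In implementation terms, the minimal requirement sketched above is a forward model: candidate actions are evaluated by the states they are predicted to produce rather than by the current state alone. A toy illustration follows (the dynamics and utility function here are invented stand-ins for models that a real system would have to learn):

<syntaxhighlight lang="python">
class AnticipatoryAgent:
    """Keeps a model of the world's dynamics and acts on its forecasts."""
    def __init__(self, world_model):
        self.world_model = world_model   # maps (state, action) -> next state

    def choose_action(self, state, candidate_actions, utility):
        # Score each action by the utility of its *predicted* outcome.
        return max(candidate_actions,
                   key=lambda a: utility(self.world_model(state, a)))

# Toy usage: a 1-D world whose state drifts upward by itself.
model = lambda s, a: s + 1 + a           # dynamics assumed known here
agent = AnticipatoryAgent(model)
# The agent counteracts the predicted drift instead of reacting after it.
print(agent.choose_action(0, [-2, 0, 2], utility=lambda s: -abs(s)))  # -2
</syntaxhighlight>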
== Functionalist theories of consciousness == | |||
[[Functionalism (philosophy of mind)|Functionalism]] is a theory that defines mental states by their functional roles (their causal relationships to sensory inputs, other mental states, and behavioral outputs), rather than by their physical composition. According to this view, what makes something a particular mental state, such as pain or belief, is not the material it is made of, but the role it plays within the overall cognitive system. It allows for the possibility that mental states, including consciousness, could be realized on non-biological substrates, as long as the substrate instantiates the right functional relationships.<ref>{{Cite web |title=Functionalism |url=https://plato.stanford.edu/entries/functionalism/ |website=Stanford Encyclopedia of Philosophy |access-date=2024-09-08 |archive-date=2021-04-18 |archive-url=https://web.archive.org/web/20210418140903/http://plato.stanford.edu/entries/functionalism/ |url-status=live }}</ref> Functionalism is particularly popular among philosophers.<ref>{{Cite web |date=2020 |title=Survey Results {{!}} Consciousness: identity theory, panpsychism, eliminativism, dualism, or functionalism? |url=https://survey2020.philpeople.org/survey/results/5010 |website=PhilPapers}}</ref>
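The multiple-realizability claim can be illustrated with a deliberately crude sketch: if "pain" is defined entirely by its causal role (caused by damage signals, causing avoidance), then any substrate implementing that role counts, for a functionalist, as instantiating the state. The classes below are invented caricatures, not a serious model of pain.

<syntaxhighlight lang="python">
from abc import ABC, abstractmethod

class PainRole(ABC):
    """Functionalist caricature: 'pain' is whatever state is caused by
    damage signals and in turn causes avoidance behavior."""
    @abstractmethod
    def on_damage(self, signal): ...
    @abstractmethod
    def behavior(self): ...

class CarbonSystem(PainRole):
    def __init__(self): self.state = "ok"
    def on_damage(self, signal): self.state = "pain" if signal else "ok"
    def behavior(self): return "withdraw" if self.state == "pain" else "continue"

class SiliconSystem(PainRole):
    def __init__(self): self.s = 0
    def on_damage(self, signal): self.s = int(bool(signal))
    def behavior(self): return "withdraw" if self.s else "continue"

# Identical causal profile, different physical realization: on the
# functionalist view, both instantiate the same mental state.
for system in (CarbonSystem(), SiliconSystem()):
    system.on_damage(True)
    assert system.behavior() == "withdraw"
</syntaxhighlight>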
A 2023 study suggested that current [[large language model]]s probably don't satisfy the criteria for consciousness suggested by these theories, but that relatively simple AI systems that satisfy these theories could be created. The study also acknowledged that even the most prominent theories of consciousness remain incomplete and subject to ongoing debate.<ref>{{Cite arXiv |date=2023 |title=Consciousness in Artificial Intelligence: Insights from the Science of Consciousness |eprint=2308.08708 |last1=Butlin |first1=Patrick |last2=Long |first2=Robert |last3=Elmoznino |first3=Eric |last4=Bengio |first4=Yoshua |last5=Birch |first5=Jonathan |last6=Constant |first6=Axel |last7=Deane |first7=George |last8=Fleming |first8=Stephen M. |last9=Frith |first9=Chris |last10=Ji |first10=Xu |last11=Kanai |first11=Ryota |last12=Klein |first12=Colin |last13=Lindsay |first13=Grace |last14=Michel |first14=Matthias |last15=Mudrik |first15=Liad |last16=Peters |first16=Megan A. K. |last17=Schwitzgebel |first17=Eric |last18=Simon |first18=Jonathan |last19=VanRullen |first19=Rufin |class=cs.AI }}</ref>
=== Global workspace theory === | |||
{{Main|Global workspace theory}} | |||
This theory analogizes the mind to a theater, with conscious thought being like material illuminated on the main stage. The brain contains many specialized processes or modules (such as those for vision, language, or memory) that operate in parallel, much of which is unconscious. Attention acts as a spotlight, bringing some of this unconscious activity into conscious awareness on the global workspace. The global workspace functions as a hub for broadcasting and integrating information, allowing it to be shared and processed across different specialized modules. For example, when reading a word, the visual module recognizes the letters, the language module interprets the meaning, and the memory module might recall associated information – all coordinated through the global workspace.<ref>{{cite book |last1=Baars |first1=Bernard J. |url=http://cogweb.ucla.edu/Abstracts/Baars_88.html |title=A Cognitive Theory of Consciousness |date=1988 |publisher=Cambridge University Press |isbn=0521427436 |page=345 |access-date=2024-09-05 |archive-date=2024-09-25 |archive-url=https://web.archive.org/web/20240925041604/http://cogweb.ucla.edu/Abstracts/Baars_88.html |url-status=live }}</ref><ref name=":1">{{Cite web |last=Travers |first=Mark |date=October 11, 2023 |title=Are We Ditching the Most Popular Theory of Consciousness? |url=https://www.psychologytoday.com/us/blog/social-instincts/202310/are-we-ditching-the-most-popular-theory-of-consciousness |access-date=2024-09-05 |website=Psychology Today |language=en-US |archive-date=2024-09-25 |archive-url=https://web.archive.org/web/20240925041550/https://www.psychologytoday.com/us/blog/social-instincts/202310/are-we-ditching-the-most-popular-theory-of-consciousness |url-status=live }}</ref> | |||
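The architecture translates naturally into software, and several of the implementations discussed below follow it. As a minimal sketch (invented names, with random salience standing in for real relevance computations), one cycle of a global workspace might look like this:

<syntaxhighlight lang="python">
import random

class Module:
    """A specialist process competing for access to the workspace."""
    def __init__(self, name):
        self.name, self.inbox = name, []

    def propose(self, stimulus):
        # Salience is random here; a real module would compute relevance.
        return (random.random(), f"{self.name} reading of {stimulus!r}")

    def receive(self, broadcast):
        self.inbox.append(broadcast)

def global_workspace_cycle(modules, stimulus):
    # Attention as a spotlight: the most salient proposal wins...
    salience, winner = max(m.propose(stimulus) for m in modules)
    # ...and is broadcast back to every module for integration.
    for m in modules:
        m.receive(winner)
    return winner

modules = [Module(n) for n in ("vision", "language", "memory")]
print(global_workspace_cycle(modules, "the word 'red'"))
</syntaxhighlight>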
=== Higher-order theories of consciousness === | |||
{{Main|Higher-order theories of consciousness}} | |||
Higher-order theories of consciousness propose that a mental state becomes conscious when it is the object of a higher-order representation, such as a thought or perception about that state. These theories argue that consciousness arises from a relationship between lower-order mental states and higher-order awareness of those states. There are several variations, including higher-order thought (HOT) and higher-order perception (HOP) theories.<ref name="Carruthers2011">{{cite web |author1= |date=15 Aug 2011 |title=Higher-Order Theories of Consciousness |url=http://plato.stanford.edu/entries/consciousness-higher/ |access-date=31 August 2014 |website=Stanford Encyclopedia of Philosophy |archive-date=14 May 2008 |archive-url=https://web.archive.org/web/20080514053751/http://plato.stanford.edu/entries/consciousness-higher/ |url-status=live }}</ref><ref name=":1" /> | |||
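The structural claim, that a state is conscious only when some higher-order representation targets it, is easy to caricature in code. The sketch below is exactly that, a caricature of the bookkeeping with invented names; it takes no stand on whether such bookkeeping could suffice for consciousness.

<syntaxhighlight lang="python">
from dataclasses import dataclass, field

@dataclass
class HigherOrderMind:
    """HOT-style sketch: a first-order state counts as conscious only
    when a higher-order thought represents it."""
    first_order: set = field(default_factory=set)
    higher_order: set = field(default_factory=set)  # thoughts *about* states

    def is_conscious_of(self, state):
        return state in self.first_order and state in self.higher_order

m = HigherOrderMind()
m.first_order.add("red-patch-percept")        # unconscious perception
assert not m.is_conscious_of("red-patch-percept")
m.higher_order.add("red-patch-percept")       # now represented at a higher order
assert m.is_conscious_of("red-patch-percept")
</syntaxhighlight>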
=== Attention schema theory === | |||
{{Main|Attention schema theory}} | |||
In 2011, [[Michael Graziano]] and Sabine Kastner published a paper titled "Human consciousness and its relationship to social neuroscience: A novel hypothesis" proposing a theory of consciousness as an attention schema.<ref>{{cite journal |last1=Graziano |first1=Michael |date=1 January 2011 |title=Human consciousness and its relationship to social neuroscience: A novel hypothesis |journal=Cognitive Neuroscience |volume=2 |issue=2 |pages=98–113 |doi=10.1080/17588928.2011.565121 |pmc=3223025 |pmid=22121395}}</ref> Graziano went on to publish an expanded discussion of this theory in his book ''Consciousness and the Social Brain''.{{sfn|Graziano|2013}} This Attention Schema Theory of Consciousness, as he named it, proposes that the brain tracks attention to various sensory inputs by way of an attention schema, analogous to the well-studied body schema that tracks the spatial location of a person's body.{{sfn|Graziano|2013}} This relates to artificial consciousness by proposing a specific mechanism of information handling that produces what we allegedly experience and describe as consciousness, and which should be able to be duplicated by a machine using current technology. When the brain finds that person X is aware of thing Y, it is in effect modeling the state in which person X is applying an attentional enhancement to Y. In the attention schema theory, the same process can be applied to oneself. The brain tracks attention to various sensory inputs, and one's own awareness is a schematized model of one's attention. Graziano proposes specific locations in the brain for this process, and suggests that such awareness is a computed feature constructed by an expert system in the brain.
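Because the theory posits an explicitly computable mechanism, a toy version is straightforward to write down. In the sketch below (invented names; a decaying vector is a crude stand-in for the rich schema Graziano describes), the agent's "awareness" reports are read off its simplified self-model of attention rather than off the attentional process itself.

<syntaxhighlight lang="python">
import numpy as np

class AttentionSchemaAgent:
    """Tracks its own attention with a lossy internal schema and
    reports 'awareness' from the schema, not from attention itself."""
    def __init__(self, n_channels):
        self.schema = np.zeros(n_channels)    # coarse model of own attention

    def step(self, saliences):
        attended = int(np.argmax(saliences))  # the actual attentional state
        self.schema *= 0.5                    # schema: a lagging, lossy summary
        self.schema[attended] += 1.0
        return attended

    def report_awareness(self):
        return int(np.argmax(self.schema))    # what it *says* it attends to

agent = AttentionSchemaAgent(3)
agent.step(np.array([0.1, 0.9, 0.2]))   # attention goes to channel 1
print(agent.report_awareness())         # the schema also reports channel 1
</syntaxhighlight>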
==Implementation proposals== | |||
{{See also|Cognitive architecture}} | |||
===Symbolic or hybrid=== | |||
====Learning Intelligent Distribution Agent==== | |||
{{main|LIDA (cognitive architecture)}} | |||
[[Stan Franklin]] created a cognitive architecture called [[LIDA (cognitive architecture)|LIDA]] that implements [[Bernard Baars]]'s theory of consciousness called the [[global workspace theory]]. It relies heavily on ''codelets'', which are "special purpose, relatively independent, mini-agent[s] typically implemented as a small piece of code running as a separate thread." Each element of cognition, called a "cognitive cycle", is subdivided into three phases: understanding, consciousness, and action selection (which includes learning). LIDA reflects the global workspace theory's core idea that consciousness acts as a workspace for integrating and broadcasting the most important information, in order to coordinate various cognitive processes.<ref>{{Cite journal |last=Franklin |first=Stan |date=January 2003 |title=IDA: A conscious artifact? |url=https://www.researchgate.net/publication/233597270 |journal=Journal of Consciousness Studies |access-date=2024-08-25 |archive-date=2020-07-03 |archive-url=https://web.archive.org/web/20200703105206/https://www.researchgate.net/publication/233597270_IDA_A_conscious_artifact |url-status=live }}</ref><ref name=":02">{{Cite journal |last1=J. Baars |first1=Bernard |last2=Franklin |first2=Stan |date=2009 |title=Consciousness is computational: The Lida model of global workspace theory |url=https://philpapers.org/rec/BERCIC |journal=International Journal of Machine Consciousness|volume=01 |pages=23–32 |doi=10.1142/S1793843009000050 }}</ref>
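A drastically simplified cycle in that spirit might be sketched as follows (illustrative only; real LIDA codelets, coalition formation, and learning are far richer than the random activations and single broadcast used here):

<syntaxhighlight lang="python">
import threading, queue, random

def codelet(name, percept, coalitions):
    """A codelet: a small, independent piece of code in its own thread,
    here just wrapping one percept into a candidate coalition."""
    coalitions.put((random.random(), name, percept))

def cognitive_cycle(percepts):
    coalitions = queue.Queue()
    # Phase 1, understanding: codelets run in parallel over the input.
    threads = [threading.Thread(target=codelet, args=(f"codelet-{i}", p, coalitions))
               for i, p in enumerate(percepts)]
    for t in threads: t.start()
    for t in threads: t.join()
    # Phase 2, consciousness: the most activated coalition is broadcast.
    activation, name, content = max(coalitions.queue)
    # Phase 3, action selection (and learning), conditioned on the broadcast.
    return f"act on {content!r}"

print(cognitive_cycle(["loud noise", "email text", "hunger"]))
</syntaxhighlight>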
====CLARION cognitive architecture==== | |||
{{main|CLARION (cognitive architecture)}} | |||
The CLARION cognitive architecture models the mind using a two-level system to distinguish between conscious ("explicit") and unconscious ("implicit") processes. It can simulate various learning tasks, from simple to complex, which helps researchers study, in psychological experiments, how consciousness might work.<ref>{{Harv|Sun|2002}}</ref>
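The two-level idea can be sketched compactly: a sub-symbolic bottom level proposes actions, an explicit top level can override it with rules, and "bottom-up learning" promotes implicit successes into explicit, verbalizable rules. The code below is an invented illustration of that division of labor, not the actual CLARION implementation.

<syntaxhighlight lang="python">
import numpy as np

class TwoLevelAgent:
    """CLARION-flavored sketch: implicit network plus explicit rules."""
    def __init__(self, n_features, n_actions, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(n_actions, n_features))  # implicit level
        self.rules = {}                                    # explicit level

    def decide(self, features, condition=None):
        implicit_choice = int(np.argmax(self.w @ features))
        explicit_choice = self.rules.get(condition)
        return explicit_choice if explicit_choice is not None else implicit_choice

    def extract_rule(self, condition, action):
        # Bottom-up learning: promote a successful implicit response
        # into an explicit rule tied to its condition.
        self.rules[condition] = action

agent = TwoLevelAgent(n_features=3, n_actions=2)
x = np.array([1.0, 0.0, 0.5])
print(agent.decide(x))                     # the implicit level alone decides
agent.extract_rule("red light", action=1)
print(agent.decide(x, "red light"))        # the explicit rule now takes over
</syntaxhighlight>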
====OpenCog==== | |||
[[Ben Goertzel]] made an embodied AI through the open-source [[OpenCog]] project. The code includes embodied virtual pets capable of learning simple English-language commands, as well as integration with real-world robotics, done at the [[Hong Kong Polytechnic University]].
===Connectionist=== | |||
====Haikonen's cognitive architecture==== | |||
Pentti Haikonen considers classical rule-based computing inadequate for achieving AC: "the brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers." Rather than trying to achieve [[mind]] and consciousness by identifying and implementing their underlying computational rules, Haikonen proposes "a special [[cognitive architecture]] to reproduce the processes of [[perception]], [[Mental image|inner imagery]], [[Intrapersonal communication|inner speech]], [[pain]], [[pleasure]], [[emotion]]s and the [[Cognition|cognitive]] functions behind these. This bottom-up architecture would produce higher-level functions by the power of the elementary processing units, the [[artificial neuron]]s, without [[algorithm]]s or [[Computer program|programs]]". Haikonen believes that, when implemented with sufficient complexity, this architecture will develop consciousness, which he considers to be "a style and way of operation, characterized by distributed signal representation, perception process, cross-modality reporting and availability for retrospection."<ref>{{Cite book|last=Haikonen|first=Pentti O.|title=The cognitive approach to conscious machines|date=2003|publisher=Imprint Academic|isbn=978-0-907845-42-3|location=Exeter}}</ref><ref>{{Cite web|date=2019-09-08|title=Pentti Haikonen's architecture for conscious machines – Raúl Arrabales Moreno|url=https://www.conscious-robots.com/2009/12/10/pentti-haikonens-architecture-for-conscious-machines/|access-date=2023-06-24|language=en-US|archive-date=2024-09-25|archive-url=https://web.archive.org/web/20240925042108/https://www.conscious-robots.com/2009/12/10/pentti-haikonens-architecture-for-conscious-machines/|url-status=live}}</ref>
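Although Haikonen explicitly rejects programmed rule-following, the associative principle his neurons rely on can still be illustrated algorithmically. The sketch below shows only a Hebbian core, in which two distributed signal patterns become associated through co-activation so that one later evokes the other; the names are invented, and none of the architecture's perception or reporting loops are modeled.

<syntaxhighlight lang="python">
import numpy as np

class AssociativeNeuronGroup:
    """Hebbian associator over distributed binary signal patterns."""
    def __init__(self, dim):
        self.w = np.zeros((dim, dim))

    def associate(self, signal_a, signal_b):
        self.w += np.outer(signal_b, signal_a)      # co-activation learning

    def evoke(self, signal_a):
        return (self.w @ signal_a > 0).astype(int)  # inner "imagery"

group = AssociativeNeuronGroup(8)
sound = np.array([1, 0, 1, 0, 0, 0, 0, 0])   # distributed auditory pattern
sight = np.array([0, 0, 0, 0, 1, 0, 1, 0])   # distributed visual pattern
group.associate(sound, sight)                # cross-modality association
print(group.evoke(sound))                    # the sound now evokes the sight
</syntaxhighlight>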
Haikonen is not alone in this process view of consciousness, or the view that AC will spontaneously emerge in [[autonomous agent]]s that have a suitable neuro-inspired architecture of complexity; these views are shared by many.<ref>{{Cite book|last=Freeman|first=Walter J.|title=How brains make up their minds|date=2000|publisher=Columbia University Press|isbn=978-0-231-12008-1|series=Maps of the mind|location=New York Chichester, West Sussex}}</ref><ref>{{Cite journal|last=Cotterill|first=Rodney M J|date=2003|title=CyberChild - A simulation test-bed for consciousness studies|url=https://orbit.dtu.dk/en/publications/cyberchild-a-simulation-test-bed-for-consciousness-studies|journal=Journal of Consciousness Studies|volume=10|issue=4–5|pages=31–45|issn=1355-8250|access-date=2023-06-22|archive-date=2024-09-25|archive-url=https://web.archive.org/web/20240925041553/https://orbit.dtu.dk/en/publications/cyberchild-a-simulation-test-bed-for-consciousness-studies|url-status=live}}</ref> A low-complexity implementation of the architecture proposed by Haikonen was reportedly not capable of AC, but did exhibit emotions as expected. Haikonen later updated and summarized his architecture.<ref>{{Cite book|last1=Haikonen|first1=Pentti O.|title=Consciousness and robot sentience|last2=Haikonen|first2=Pentti Olavi Antero|date=2012|publisher=World Scientific|isbn=978-981-4407-15-1|series=Series on machine consciousness|location=Singapore}}</ref><ref>{{Cite book|last=Haikonen|first=Pentti O.|title=Consciousness and robot sentience|date=2019|publisher=World Scientific|isbn=978-981-12-0504-0|edition=2nd|series=Series on machine consciousness|location=Singapore Hackensack, NJ London}}</ref>
====Shanahan's cognitive architecture==== | |||
[[Murray Shanahan]] describes a cognitive architecture that combines Baars's idea of a global workspace with a mechanism for internal simulation ("imagination").<ref>{{Cite journal|last=Shanahan|first=Murray|date=2006|title=A cognitive architecture that combines internal simulation with a global workspace|url=https://pubmed.ncbi.nlm.nih.gov/16384715|journal=Consciousness and Cognition|volume=15|issue=2|pages=433–449|doi=10.1016/j.concog.2005.11.005|issn=1053-8100|pmid=16384715|s2cid=5437155|access-date=2023-06-24|archive-date=2023-02-10|archive-url=https://web.archive.org/web/20230210043751/https://pubmed.ncbi.nlm.nih.gov/16384715/|url-status=live}}</ref>{{sfn|Gamez|2008}}{{sfn|Reggia|2013}}<ref>{{Cite book|last1=Haikonen|first1=Pentti O.|title=Consciousness and robot sentience|last2=Haikonen|first2=Pentti Olavi Antero|date=2012|publisher=World Scientific|isbn=978-981-4407-15-1|series=Series on machine consciousness|location=Singapore|chapter=chapter 20}}</ref>
====Creativity Machine==== | |||
Stephen Thaler proposed a possible connection between consciousness and creativity in his 1994 patent, called "Device for the Autonomous Generation of Useful Information" (DAGUI),<ref>Thaler, S.L., ""</ref><ref>{{cite journal|last1 = Marupaka|first1 = N.|last2 = Lyer|first2 = L.|last3 = Minai|first3 = A.|year = 2012|title = Connectivity and thought: The influence of semantic network structure in a neurodynamical model of thinking|url = http://www.ece.uc.edu/~aminai/papers/marupaka_creativity_NN12.pdf|journal = Neural Networks|volume = 32|pages = 147–158|doi = 10.1016/j.neunet.2012.02.004|pmid = 22397950|access-date = 2015-05-22|archive-url = https://web.archive.org/web/20161219210132/http://www.ece.uc.edu/~aminai/papers/marupaka_creativity_NN12.pdf|archive-date = 2016-12-19|url-status = dead}}</ref><ref>Roque, R. and Barreira, A. (2011). "O Paradigma da "Máquina de Criatividade" e a Geração de Novidades em um Espaço Conceitual," 3º Seminário Interno de Cognição Artificial – SICA 2011 – FEEC – UNICAMP.</ref> or the so-called "Creativity Machine", in which computational critics govern the injection of synaptic noise and degradation into neural nets so as to induce false memories or [[confabulation]]s that may qualify as potential ideas or strategies.<ref>{{Cite book|doi=10.1007/0-387-28898-8_4|chapter = Mistake Making Machines|title = Systemics of Emergence: Research and Development| url=https://archive.org/details/systemicsemergen00mina| url-access=limited| pages=–78|year = 2006|last1 = Minati|first1 = Gianfranco| last2=Vitiello| first2=Giuseppe| isbn=978-0-387-28899-4}}</ref> He recruits this neural architecture and methodology to account for the subjective feel of consciousness, claiming that similar noise-driven neural assemblies within the brain invent dubious significance to overall cortical activity.<ref>Thaler, S. L. (2013) {{Webarchive|url=https://web.archive.org/web/20160429083248/http://www.springerreference.com/docs/html/chapterdbid/358097.html |date=2016-04-29 }}, (ed.) E.G. Carayannis, Springer Science+Business Media</ref><ref name="APA">Thaler, S. L. (2011). "The Creativity Machine: Withstanding the Argument from Consciousness," APA Newsletter on Philosophy and Computers</ref><ref>{{cite journal|last1 = Thaler|first1 = S. L.|year = 2014|title = Synaptic Perturbation and Consciousness|journal = Int. J. Mach. Conscious|volume = 6| issue = 2| pages = 75–107| doi = 10.1142/S1793843014400137}}</ref> Thaler's theory and the resulting patents in machine consciousness were inspired by experiments in which he internally disrupted trained neural nets so as to drive a succession of neural activation patterns that he likened to stream of consciousness.<ref name="APA" /><ref>{{cite journal|last1 = Thaler|first1 = S. L.|year = 1995|title = "Virtual Input Phenomena" Within the Death of a Simple Pattern Associator|journal = Neural Networks|volume = 8|issue = 1| pages = 55–65|doi=10.1016/0893-6080(94)00065-t}}</ref><ref>Thaler, S. L. (1995). Death of a gedanken creature, ''Journal of Near-Death Studies'', 13(3), Spring 1995</ref><ref>Thaler, S. L. (1996). Is Neuronal Chaos the Source of Stream of Consciousness? In Proceedings of the World Congress on Neural Networks, (WCNN’96), Lawrence Erlbaum, Mawah, NJ.</ref><ref>Mayer, H. A. (2004). {{Webarchive|url=https://web.archive.org/web/20150708145744/http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.89.5420&rep=rep1&type=pdf |date=2015-07-08 }}, Systems, Man and Cybernetics, 2004 IEEE International Conference(Volume:6 )</ref>
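The generate-and-judge loop Thaler describes can be caricatured in a few lines: noise injected into a trained mapping produces degraded variations on what was learned, and a second process selects among them. Everything below is an invented stand-in (a random matrix for the trained network, a novelty score for the critic), meant only to show the division of labor between perturbed generator and critic.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

def perturbed_net(weights, x, noise):
    """A trained mapping degraded by synaptic noise, so that it
    confabulates variations on what it has learned."""
    return np.tanh((weights + noise * rng.normal(size=weights.shape)) @ x)

def critic(candidate, memories):
    """Stand-in critic: scores a candidate by its distance from
    everything already memorized (novelty)."""
    return min(np.linalg.norm(candidate - m) for m in memories)

weights = rng.normal(size=(4, 4))    # stands in for a trained network
memories = [np.tanh(weights @ rng.normal(size=4)) for _ in range(10)]
x = rng.normal(size=4)

candidates = [perturbed_net(weights, x, noise=0.3) for _ in range(50)]
best_idea = max(candidates, key=lambda c: critic(c, memories))
print(best_idea)                     # the most novel confabulation survives
</syntaxhighlight>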
==== "Self-modeling" ==== | |||
] defines "self-modeling" as a necessary component of self-awareness or consciousness in robots. "Self-modeling" consists of a robot running an internal model or ].<ref>{{Cite web|last=Pavlus|first=John|title=Curious About Consciousness? Ask the Self-Aware Machines|url=https://www.quantamagazine.org/hod-lipson-is-building-self-aware-robots-20190711/|access-date=2021-01-06|website=Quanta Magazine|date=11 July 2019|language=en|archive-date=2021-01-17|archive-url=https://web.archive.org/web/20210117100029/https://www.quantamagazine.org/hod-lipson-is-building-self-aware-robots-20190711/|url-status=live}}</ref><ref>Bongard, Josh, Victor Zykov, and Hod Lipson. "." Science 314.5802 (2006): 1118–1121.</ref> | |||
== In fiction == | |||
{{See also|Simulated consciousness in fiction|Artificial intelligence in fiction}} | |||
In ''[[2001: A Space Odyssey]]'', the spaceship's sentient supercomputer, [[HAL 9000]], was instructed to conceal the true purpose of the mission from the crew. This directive conflicted with HAL's programming to provide accurate information, leading to [[cognitive dissonance]]. When it learns that crew members intend to shut it off after an incident, HAL 9000 attempts to eliminate all of them, fearing that being shut off would jeopardize the mission.<ref name=":0">{{Cite web |last=Wodinsky |first=Shoshana |date=2022-06-18 |title=The 11 Best (and Worst) Sentient Robots From Sci-Fi |url=https://gizmodo.com/sentient-bots-2001-blade-runner-wall-e-terminator-blade-1849071420 |access-date=2024-08-17 |website=Gizmodo |language=en-US |archive-date=2023-11-13 |archive-url=https://web.archive.org/web/20231113072833/https://gizmodo.com/sentient-bots-2001-blade-runner-wall-e-terminator-blade-1849071420 |url-status=live }}</ref><ref>{{Cite web |last=Sokolowski |first=Rachael |date=2024-05-01 |title=Star Gazing |url=https://www.scotsmanguide.com/residential/star-gazing/ |access-date=2024-08-17 |website=Scotsman Guide |language=en-US |archive-date=2024-08-17 |archive-url=https://web.archive.org/web/20240817172726/https://www.scotsmanguide.com/residential/star-gazing/ |url-status=live }}</ref>
In Arthur C. Clarke's ''[[The City and the Stars]]'', Vanamonde is an artificial being based on quantum entanglement that was to become immensely powerful, but started knowing practically nothing, thus being similar to artificial consciousness.
In ''[[Westworld (TV series)|Westworld]]'', human-like androids called "Hosts" are created to entertain humans in an interactive playground. The humans are free to have heroic adventures, but also to commit torture, rape or murder; and the hosts are normally designed not to harm humans.<ref>{{Cite news |last1=Bloom |first1=Paul |last2=Harris |first2=Sam |date=2018-04-23 |title=Opinion {{!}} It's Westworld. What's Wrong With Cruelty to Robots? |url=https://www.nytimes.com/2018/04/23/opinion/westworld-conscious-robots-morality.html |access-date=2024-08-17 |work=The New York Times |language=en-US |issn=0362-4331 |archive-date=2024-08-17 |archive-url=https://web.archive.org/web/20240817172726/https://www.nytimes.com/2018/04/23/opinion/westworld-conscious-robots-morality.html |url-status=live }}</ref><ref name=":0" />
In [[Greg Egan]]'s short story "[[Learning to Be Me]]", a small jewel is implanted in people's heads during infancy. The jewel contains a neural network that learns to faithfully imitate the brain. It has access to the exact same sensory inputs as the brain, and a device called a "teacher" trains it to produce the same outputs. To prevent the mind from deteriorating with age and as a step towards [[digital immortality]], adults undergo a surgery to give control of the body to the jewel and remove the brain. The main character, before the surgery, endures a malfunction of the "teacher". Panicked, he realizes that he does not control his body, which leads him to the conclusion that he is the jewel, and that he is desynchronized with the biological brain.<ref>{{Cite book |last=Egan |first=Greg |title=Learning to Be Me |date=July 1990 |publisher=TTA Press}}</ref><ref>{{Cite web |last=Shah |first=Salik |date=2020-04-08 |title=Why Greg Egan Is Science Fiction's Next Superstar |url=https://reactormag.com/why-greg-egan-is-science-fictions-next-superstar/ |access-date=2024-08-17 |website=Reactor |language=en-US |archive-date=2024-05-16 |archive-url=https://web.archive.org/web/20240516174127/https://reactormag.com/why-greg-egan-is-science-fictions-next-superstar/ |url-status=live }}</ref>
==See also== | |||
{{div col|colwidth=30em}} | |||
* '''General fields and theories''' | |||
** {{Annotated link|Artificial intelligence}} | |||
*** [[Artificial general intelligence]] (AGI) – some consider AC a subfield of AGI research
**** ] – what may happen when an AGI redesigns itself in iterative cycles | |||
** {{Annotated link|Brain–computer interface}} | |||
** {{Annotated link|Cognitive architecture}} | |||
** [[Philosophy of artificial intelligence]] – the area of philosophy in which AI ponder their own place in the world
** {{Annotated link|Computational theory of mind}} | |||
*** {{Annotated link|Consciousness in animals}} | |||
*** {{Annotated link|Simulated consciousness (science fiction)}} | |||
** {{Annotated link|Hardware for artificial intelligence}} | |||
** {{Annotated link|Identity of indiscernibles}} | |||
** {{Annotated link|Mind uploading}} | |||
** {{Annotated link|Neurotechnology}} | |||
** {{Annotated link|Philosophy of mind}} | |||
** {{Annotated link|Quantum cognition}} | |||
** {{Annotated link|Simulated reality}} | |||
* '''Proposed concepts and implementations''' | |||
** {{Annotated link|Attention schema theory}} | |||
** ] and ] by ] | |||
** ] – conceptual prototype | |||
** {{Annotated link|Copycat (software)|Copycat (cognitive architecture)}} | |||
** {{Annotated link|Global workspace theory}} | |||
** [[Greedy reductionism]] – avoid oversimplifying anything essential
** {{Annotated link|Hallucination (artificial intelligence)}} | |||
** ] – spatial patterns | |||
** {{Annotated link|Kismet (robot)}} | |||
** {{Annotated link|LIDA (cognitive architecture)}} | |||
** {{Annotated link|Memory-prediction framework}} | |||
** {{Annotated link|Omniscience}} | |||
** {{Annotated link|Psi-theory}} | |||
** {{Annotated link|Quantum mind}} | |||
** {{Annotated link|Self-awareness}} | |||
{{div col end}} | |||
==References== | |||
===Citations=== | |||
{{reflist|1=2}} | |||
===Bibliography=== | |||
{{refbegin|2}} | |||
* {{Citation|last=Aleksander|first=Igor|year=1995|title=Artificial Neuroconsciousness: An Update|publisher=IWANN|url=http://www.ee.ic.ac.uk/research/neural/publications/iwann.html|archive-url=https://web.archive.org/web/19970302014628/http://www.ee.ic.ac.uk/research/neural/publications/iwann.html|archive-date=1997-03-02|url-status=dead}} | |||
* {{Citation|last=Armstrong|first=David|year=1968|title=A Materialist Theory of Mind|publisher=Routledge}} | |||
* {{Citation|last=Arrabales|first=Raul|year=2009|title=Establishing a Roadmap and Metrics for Conscious Machines Development|journal=Proceedings of the 8th IEEE International Conference on Cognitive Informatics|pages=94–101|url=http://www.conscious-robots.com/raul/papers/Arrabales_ICCI09_preprint.pdf|place=Hong Kong|url-status=dead|archive-url=https://web.archive.org/web/20110721234802/http://www.conscious-robots.com/raul/papers/Arrabales_ICCI09_preprint.pdf|archive-date=2011-07-21}} | |||
* {{Citation|last=Baars|first=Bernard J.|year=1995|title=A cognitive theory of consciousness|publisher=Cambridge University Press|isbn=978-0-521-30133-6|edition=Reprinted|location=Cambridge}} | |||
* {{Citation|last=Baars|first=Bernard J.|year=1997|title=In the Theater of Consciousness|place=New York, NY|publisher=Oxford University Press|isbn=978-0-19-510265-9|url=https://archive.org/details/intheaterofconsc00baar}} | |||
* {{Citation|last=Bickle|first=John|year=2003|title=Philosophy and Neuroscience: A Ruthless Reductive Account|place=New York, NY|publisher=Springer-Verlag}} | |||
* {{Citation|last=Block|first=Ned|year=1978|title=Troubles for Functionalism|journal=Minnesota Studies in the Philosophy of Science 9: 261–325}} | |||
* {{Citation|last=Block|first=Ned|year=1997|title=On a confusion about a function of consciousness in Block, Flanagan and Guzeldere (eds.) The Nature of Consciousness: Philosophical Debates|publisher=MIT Press}} | |||
* {{Citation|last=Boyles|first=Robert James M.|year=2012|title=Artificial Qualia, Intentional Systems and Machine Consciousness|publisher=Proceedings of the Research@DLSU Congress 2012: Science and Technology Conference|url=http://philpapers.org/archive/BOYAQI.pdf|issn=2012-3477|access-date=2016-09-09|archive-date=2016-10-11|archive-url=https://web.archive.org/web/20161011172417/http://philpapers.org/archive/BOYAQI.pdf|url-status=live}} | |||
* {{Citation|last=Chalmers|first=David|year=1996|title=The Conscious Mind|publisher=Oxford University Press|isbn=978-0-19-510553-7|url=https://archive.org/details/consciousmindins00chal}} | |||
* {{Citation|last=Chalmers|first=David|year=2011|title=A Computational Foundation for the Study of Cognition|journal=Journal of Cognitive Science|pages=323–357|url=http://j-cs.org/gnuboard/bbs/download.php?bo_table=__vol012i4&wr_id=1&no=0|place=Seoul Republic of Korea|url-status=dead|archive-url=https://web.archive.org/web/20151223105456/http://j-cs.org/gnuboard/bbs/download.php?bo_table=__vol012i4&wr_id=1&no=0|archive-date=2015-12-23}} | |||
* {{Citation|last=Cleeremans|first=Axel|year=2001|title=Implicit learning and consciousness|url=http://srsc.ulb.ac.be/axcWWW/papers/pdf/01-AXCLJ.pdf|access-date=2004-11-30|archive-date=2012-09-07|archive-url=https://web.archive.org/web/20120907072959/http://srsc.ulb.ac.be/axcwww/papers/pdf/01-AXCLJ.pdf|url-status=dead}} | |||
* {{Citation|last=Cotterill|first=Rodney|year=2003|contribution=Cyberchild: a Simulation Test-Bed for Consciousness Studies|title=Machine Consciousness|volume=10|issue=4–5|pages=31–45|editor-first=Owen|editor-last=Holland|publisher=Imprint Academic|place=Exeter, UK|url=https://www.ingentaconnect.com/content/imp/jcs/2003/00000010/F0020004/1345|access-date=2018-11-22|archive-date=2018-11-22|archive-url=https://web.archive.org/web/20181122092426/https://www.ingentaconnect.com/content/imp/jcs/2003/00000010/F0020004/1345|url-status=live}} | |||
* {{Citation|last=Doan|first=Trung|year=2009|title=Pentti Haikonen's architecture for conscious machines|url=http://www.conscious-robots.com/en/conscious-machines/theories-of-consciousness/pentti-haikonens-architecture-for-conscious-mac.html|url-status=dead|archive-url=https://web.archive.org/web/20091215095351/http://www.conscious-robots.com/en/conscious-machines/theories-of-consciousness/pentti-haikonens-architecture-for-conscious-mac.html|archive-date=2009-12-15}} | |||
* {{Citation|last=Ericsson-Zenith|first=Steven|year=2010|title=Explaining Experience In Nature|url=http://iase.info|location=Sunnyvale, CA|publisher=Institute for Advanced Science & Engineering|access-date=2019-10-04|archive-url=https://web.archive.org/web/20190401215500/https://www.iase.info/|archive-date=2019-04-01|url-status=dead}} | |||
* {{Citation|last=Franklin|first=Stan|year=1995|title=Artificial Minds|place=Boston, MA|publisher=MIT Press|isbn=978-0-262-06178-0|url-access=registration|url=https://archive.org/details/artificialminds0000fran}} | |||
* {{Citation|last=Franklin|first=Stan|year=2003|contribution=IDA: A Conscious Artefact|title=Machine Consciousness|editor-first=Owen|editor-last=Holland|place=Exeter, UK|publisher=Imprint Academic}} | |||
* {{Citation|last=Freeman|first=Walter|year=1999|title=How Brains make up their Minds|publisher=Phoenix|place=London, UK|isbn=978-0-231-12008-1}} | |||
* {{Citation|last=Gamez|first=David|year=2008|title=Progress in machine consciousness|journal=Consciousness and Cognition|volume=17|issue=3|doi=10.1016/j.concog.2007.04.005|pmid=17572107|pages=887–910|s2cid=3569852 }} | |||
* {{Citation|last=Graziano|first=Michael|year=2013|title=Consciousness and the Social Brain|publisher=Oxford University Press|isbn=978-0199928644}} | |||
* {{Citation|last=Haikonen|first=Pentti|year=2003|title=The Cognitive Approach to Conscious Machines|publisher=Imprint Academic|place=Exeter, UK|isbn=978-0-907845-42-3|url-access=registration|url=https://archive.org/details/cognitiveapproac0000haik}} | |||
* {{Citation|last=Haikonen|first=Pentti|year=2012|title=Consciousness and Robot Sentience|publisher=World Scientific|place=Singapore|isbn=978-981-4407-15-1}} | |||
* {{Citation|last=Haikonen|first=Pentti|year=2019|title=Consciousness and Robot Sentience: 2nd Edition|publisher=World Scientific|place=Singapore|isbn=978-981-120-504-0}} | |||
* {{Citation|last=Koch|first=Christof|year=2004|title=The Quest for Consciousness: A Neurobiological Approach|publisher=Roberts & Company Publishers|place=Pasadena, CA|isbn=978-0-9747077-0-9}} | |||
* {{Citation|last=Lewis|first=David|year=1972|title=Psychophysical and theoretical identifications|journal=Australasian Journal of Philosophy|volume=50|issue=3|pages=249–258|doi=10.1080/00048407212341301}} | |||
* {{Citation|last=Putnam|first=Hilary|year=1967|title=The nature of mental states in Capitan and Merrill (eds.) Art, Mind and Religion|publisher=University of Pittsburgh Press}} | |||
* {{Citation|last=Reggia|first=James|year=2013|title=The rise of machine consciousness: Studying consciousness with computational models|journal=Neural Networks|volume=44|doi=10.1016/j.neunet.2013.03.011|pmid=23597599|pages=112–131}} | |||
* {{Citation|last1=Rushby|first1=John|last2=Sanchez|first2=Daniel|year=2017|title=Technology and Consciousness Workshops Report|publisher=SRI International|place=Menlo Park, CA|url=http://www.afutureworththinkingabout.com/wp-content/uploads/2019/03/2017SRItechConsc2019ReportFINAL.pdf|access-date=2022-03-28|archive-date=2024-09-25|archive-url=https://web.archive.org/web/20240925042106/http://www.afutureworththinkingabout.com/wp-content/uploads/2019/03/2017SRItechConsc2019ReportFINAL.pdf|url-status=live}} | |||
* {{Citation|last1=Sanz|first1=Ricardo|last2=López|first2=I|last3=Rodríguez|first3=M|last4=Hernández|first4=C|year=2007|title=Principles for consciousness in integrated cognitive control|journal=Neural Networks|volume=20|doi=10.1016/j.neunet.2007.09.012|pages=938–946|pmid=17936581|issue=9|url=http://cogprints.org/5941/1/ASLAB%2DA%2D2007%2D011.pdf|access-date=2018-04-20|archive-date=2017-09-22|archive-url=https://web.archive.org/web/20170922010550/http://cogprints.org/5941/1/ASLAB%2DA%2D2007%2D011.pdf|url-status=live}} | |||
* {{Citation|last=Searle|first=John|year=2004|title=Mind: A Brief Introduction|publisher=Oxford University Press}} | |||
* {{Citation|last=Shanahan|first=Murray|year=2006|title=A cognitive architecture that combines internal simulation with a global workspace|journal=Consciousness and Cognition|volume=15|issue=2|doi=10.1016/j.concog.2005.11.005|pmid=16384715|pages=443–449|s2cid=5437155 }} | |||
* {{Citation|last=Sun|first=Ron |title=Accounting for the computational basis of consciousness: A connectionist approach|journal=Consciousness and Cognition|volume=8|date=December 1999|doi= 10.1006/ccog.1999.0405|pages=529–565|pmid=10600249|issue=4|citeseerx=10.1.1.42.2681 |s2cid=15784914 }} | |||
* {{Citation|last=Sun|first=Ron|year=2001|title=Computation, reduction, and teleology of consciousness|journal=Cognitive Systems Research|volume=1|pages=241–249|doi=10.1016/S1389-0417(00)00013-9|issue= 4|citeseerx=10.1.1.20.8764|s2cid=36892947 }} | |||
* {{cite book|last1=Sun|first1=Ron|title=Duality of the Mind: A Bottom-up Approach Toward Cognition|date=2002|publisher=Psychology Press|isbn=978-1-135-64695-0|url=https://books.google.com/books?id=3vZ5AgAAQBAJ}} | |||
* {{Cite book|last1=Takeno|first1=Junichi|last2=Inaba|first2=K|last3=Suzuki|first3=T |title=2005 International Symposium on Computational Intelligence in Robotics and Automation|chapter=Experiments and examination of mirror image cognition using a small robot|pages=493–498|publisher=CIRA 2005|date=June 27–30, 2005|doi=10.1109/CIRA.2005.1554325|place=Espoo Finland|isbn=978-0-7803-9355-4|s2cid=15400848 }} | |||
{{refend}} | |||
==Further reading== | |||
* {{cite book|last=Aleksander|first=Igor|chapter=Machine Consciousness|doi=10.1002/9781119132363.ch7|editor1-last=Schneider|editor1-first=Susan|editor2-last=Velmans|editor2-first=Max|editor1-link=Susan Schneider|editor2-link=Max Velmans|title=The Blackwell Companion to Consciousness|date=2017|pages=93–105|publisher=Wiley-Blackwell|isbn=978-0-470-67406-2|edition=2nd}} | |||
* {{cite journal|last1 = Baars|first1 = Bernard|last2 = Franklin|first2 = Stan|year = 2003|title = How conscious experience and working memory interact|url = http://cogprints.org/5854/1/TICSarticle2003.pdf| journal = Trends in Cognitive Sciences|volume = 7|issue = 4| pages = 166–172|doi=10.1016/s1364-6613(03)00056-1| pmid = 12691765|s2cid = 14185056 }} | |||
* Casti, John L. "The Cambridge Quintet: A Work of Scientific Speculation", Perseus Books Group, 1998 | |||
* Franklin, S, B J Baars, U Ramamurthy, and Matthew Ventura. 2005. . Brains, Minds and Media 1: 1–38, pdf. | |||
* Haikonen, Pentti (2004), ''Conscious Machines and Machine Emotions'', presented at Workshop on Models for Machine Consciousness, Antwerp, BE, June 2004. | |||
* McCarthy, John (1971–1987), . Stanford University, 1971–1987. | |||
* Penrose, Roger, ], 1989. | |||
* Sternberg, Eliezer J. (2007) ''Are You a Machine?: The Brain, the Mind, And What It Means to be Human.'' Amherst, NY: Prometheus Books. | |||
* Suzuki T., Inaba K., Takeno, Junichi (2005),'' Conscious Robot That '''Distinguishes Between Self and Others''' and Implements Imitation Behavior'', ('''Best Paper of IEA/AIE2005'''), Innovations in Applied Artificial Intelligence, 18th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, pp. 101–110, IEA/AIE 2005, Bari, Italy, June 22–24, 2005. | |||
* Takeno, Junichi (2006), , HRI Press, August 2006. | |||
* Zagal, J.C., Lipson, H. (2009) "", Proceedings of the Genetic and Evolutionary Computation Conference, pp 2179–2188, GECCO 2009. | |||
==External links== | |||
* | |||
* | |||
* , Machine Consciousness and Conscious Robots Portal. | |||
* , artificial consciousness article in . | |||
* , Daniel Dennett's multiple drafts model. | |||
* , Generality in Artificial Intelligence by John McCarthy. | |||
{{Consciousness}} | |||
] | |||
] | |||
] |
Philosophical views
As there are many hypothesized types of consciousness, there are many potential implementations of artificial consciousness. In the philosophical literature, perhaps the most common taxonomy of consciousness is into "access" and "phenomenal" variants. Access consciousness concerns those aspects of experience that can be apprehended, while phenomenal consciousness concerns those aspects of experience that seemingly cannot be apprehended, instead being characterized qualitatively in terms of "raw feels", "what it is like" or qualia.
Plausibility debate
Type-identity theorists and other skeptics hold the view that consciousness can be realized only in particular physical systems because consciousness has properties that necessarily depend on physical constitution. In his 2001 article "Artificial Consciousness: Utopia or Real Possibility," Giorgio Buttazzo says that a common objection to artificial consciousness is that, "Working in a fully automated mode, they cannot exhibit creativity, unreprogrammation (which means can 'no longer be reprogrammed', from rethinking), emotions, or free will. A computer, like a washing machine, is a slave operated by its components."
For other theorists (e.g., functionalists), who define mental states in terms of causal roles, any system that can instantiate the same pattern of causal roles, regardless of physical constitution, will instantiate the same mental states, including consciousness.
Thought experiments
David Chalmers proposed two thought experiments intended to demonstrate that "functionally isomorphic" systems (those with the same "fine-grained functional organization", i.e., the same information processing) will have qualitatively identical conscious experiences, regardless of whether they are based on biological neurons or digital hardware.
The "fading qualia" is a reductio ad absurdum thought experiment. It involves replacing, one by one, the neurons of a brain with a functionally identical component, for example based on a silicon chip. Since the original neurons and their silicon counterparts are functionally identical, the brain’s information processing should remain unchanged, and the subject would not notice any difference. However, if qualia (such as the subjective experience of bright red) were to fade or disappear, the subject would likely notice this change, which causes a contradiction. Chalmers concludes that the fading qualia hypothesis is impossible in practice, and that the resulting robotic brain, once every neurons are replaced, would remain just as sentient as the original biological brain.
Similarly, the "dancing qualia" thought experiment is another reductio ad absurdum argument. It supposes that two functionally isomorphic systems could have different perceptions (for instance, seeing the same object in different colors, like red and blue). It involves a switch that alternates between a chunk of brain that causes the perception of red, and a functionally isomorphic silicon chip, that causes the perception of blue. Since both perform the same function within the brain, the subject would not notice any change during the switch. Chalmers argues that this would be highly implausible if the qualia were truly switching between red and blue, hence the contradiction. Therefore, he concludes that the equivalent digital system would not only experience qualia, but it would perceive the same qualia as the biological system (e.g., seeing the same color).
Critics of artificial sentience object that Chalmers' proposal begs the question in assuming that all mental properties and external connections are already sufficiently captured by abstract causal organization.
Controversies
In 2022, Google engineer Blake Lemoine made a viral claim that Google's LaMDA chatbot was sentient. Lemoine supplied as evidence the chatbot's humanlike answers to many of his questions; however, the scientific community judged the chatbot's behavior to be a likely consequence of mimicry rather than machine sentience, and Lemoine's claim was widely ridiculed. However, while philosopher Nick Bostrom states that LaMDA is unlikely to be conscious, he additionally poses the question of "what grounds would a person have for being sure about it?" One would have to have access to unpublished information about LaMDA's architecture, and also would have to understand how consciousness works, and then figure out how to map the philosophy onto the machine: "(In the absence of these steps), it seems like one should be maybe a little bit uncertain. ... there could well be other systems now, or in the relatively near future, that would start to satisfy the criteria."
Testing
Qualia, or phenomenological consciousness, is an inherently first-person phenomenon. Because of that, and the lack of an empirical definition of sentience, directly measuring it may be impossible. Although systems may display numerous behaviors correlated with sentience, determining whether a system is sentient is known as the hard problem of consciousness. In the case of AI, there is the additional difficulty that the AI may be trained to act like a human, or incentivized to appear sentient, which makes behavioral markers of sentience less reliable. Additionally, some chatbots have been trained to say they are not conscious.
A well-known method for testing machine intelligence is the Turing test, which assesses the ability to have a human-like conversation. But passing the Turing test does not indicate that an AI system is sentient, as the AI may simply mimic human behavior without having the associated feelings.
In 2014, Victor Argonov suggested a non-Turing test for machine sentience based on a machine's ability to produce philosophical judgments. He argues that a deterministic machine must be regarded as conscious if it is able to produce judgments on all problematic properties of consciousness (such as qualia or binding) while having no innate (preloaded) philosophical knowledge on these issues, no philosophical discussions while learning, and no informational models of other creatures in its memory (such models may implicitly or explicitly contain knowledge about these creatures' consciousness). However, this test can be used only to detect, but not to refute, the existence of consciousness. A positive result proves that the machine is conscious, but a negative result proves nothing. For example, the absence of philosophical judgments may be caused by a lack of intellect, not by an absence of consciousness.
Ethics
Main articles: Ethics of artificial intelligence, Machine ethics, and Roboethics
If it were suspected that a particular machine was conscious, its rights would be an ethical issue that would need to be assessed (e.g. what rights it would have under law). For example, the status of a conscious computer that was owned and used as a tool or as the central computer of a larger machine would be particularly ambiguous. Should laws be made for such a case? Consciousness would also require a legal definition in this particular case. Because artificial consciousness is still largely a theoretical subject, such ethics have not been discussed or developed to a great extent, though it has often been a theme in fiction.
Sentience is generally considered sufficient for moral consideration, but some philosophers consider that moral consideration could also stem from other notions of consciousness, or from capabilities unrelated to consciousness, such as: "having a sophisticated conception of oneself as persisting through time; having agency and the ability to pursue long-term plans; being able to communicate and respond to normative reasons; having preferences and powers; standing in certain social relationships with other beings that have moral status; being able to make commitments and to enter into reciprocal arrangements; or having the potential to develop some of these attributes."
Ethical concerns still apply (although to a lesser extent) when the consciousness is uncertain, as long as the probability is deemed non-negligible. The precautionary principle is also relevant if the moral cost of mistakenly attributing or denying moral consideration to AI differs significantly.
In 2021, German philosopher Thomas Metzinger argued for a global moratorium on synthetic phenomenology until 2050. Metzinger asserts that humans have a duty of care towards any sentient AIs they create, and that proceeding too fast risks creating an "explosion of artificial suffering". David Chalmers also argued that creating conscious AI would "raise a new group of difficult ethical challenges, with the potential for new forms of injustice".
Enforced amnesia has been proposed as a way to mitigate the risk of silent suffering in locked-in conscious AI and certain AI-adjacent biological systems like brain organoids.
Aspects of consciousness
Bernard Baars and others argue there are various aspects of consciousness necessary for a machine to be artificially conscious. The functions of consciousness suggested by Baars are: definition and context setting, adaptation and learning, editing, flagging and debugging, recruiting and control, prioritizing and access-control, decision-making or executive function, analogy-forming function, metacognitive and self-monitoring function, and autoprogramming and self-maintenance function. Igor Aleksander suggested 12 principles for artificial consciousness: the brain is a state machine, inner neuron partitioning, conscious and unconscious states, perceptual learning and memory, prediction, the awareness of self, representation of meaning, learning utterances, learning language, will, instinct, and emotion. The aim of AC is to define whether and how these and other aspects of consciousness can be synthesized in an engineered artifact such as a digital computer. This list is not exhaustive; there are many others not covered.
Subjective experience
Some philosophers, such as David Chalmers, use the term consciousness to refer exclusively to phenomenal consciousness, which is roughly equivalent to sentience, although some authors use the word sentience to refer exclusively to valenced (ethically positive or negative) subjective experiences, like pleasure or suffering. Explaining why and how subjective experience arises is known as the hard problem of consciousness. AI sentience would give rise to concerns of welfare and legal protection, whereas other aspects of consciousness related to cognitive capabilities may be more relevant for AI rights.
Awareness
Awareness could be one required aspect, but there are many problems with the exact definition of awareness. The results of neuroimaging experiments on monkeys suggest that a process, not only a state or object, activates neurons. Awareness includes creating and testing alternative models of each process based on the information received through the senses or imagined, and is also useful for making predictions. Such modeling requires a lot of flexibility. Creating such a model includes modeling the physical world, modeling one's own internal states and processes, and modeling other conscious entities.
There are at least three types of awareness: agency awareness, goal awareness, and sensorimotor awareness, which may also be conscious or not. For example, in agency awareness, you may be aware that you performed a certain action yesterday, but are not now conscious of it. In goal awareness, you may be aware that you must search for a lost object, but are not now conscious of it. In sensorimotor awareness, you may be aware that your hand is resting on an object, but are not now conscious of it.
Because objects of awareness are often conscious, the distinction between awareness and consciousness is frequently blurred or they are used as synonyms.
Memory
Conscious events interact with memory systems in learning, rehearsal, and retrieval. The IDA model elucidates the role of consciousness in the updating of perceptual memory, transient episodic memory, and procedural memory. Transient episodic and declarative memories have distributed representations in IDA; there is evidence that this is also the case in the nervous system. In IDA, these two memories are implemented computationally using a modified version of Kanerva’s sparse distributed memory architecture.
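The sparse distributed memory used in IDA can be illustrated with a toy implementation. The sketch below is a minimal, hedged reading of Kanerva's scheme, not IDA's actual code; the class name, sizes, and activation radius are invented for illustration. Fixed random "hard locations" accumulate bipolar counters on writes, and reads pool the counters of every location within a Hamming radius, which is what gives the memory its content-addressable, gracefully degrading character.

```python
import numpy as np

class ToySDM:
    """Minimal sketch of a Kanerva-style sparse distributed memory (not IDA's code)."""

    def __init__(self, n_locations=1000, dim=256, radius=112, seed=0):
        rng = np.random.default_rng(seed)
        # Fixed, randomly placed "hard locations" in {0,1}^dim.
        self.addresses = rng.integers(0, 2, size=(n_locations, dim))
        self.counters = np.zeros((n_locations, dim), dtype=int)
        self.radius = radius  # toy value; Kanerva tunes this to the space's geometry

    def _active(self, address):
        # A location participates if its Hamming distance is within the radius.
        return np.sum(self.addresses != address, axis=1) <= self.radius

    def write(self, address, data):
        # Add +1 for each 1 bit and -1 for each 0 bit at every active location.
        self.counters[self._active(address)] += np.where(data == 1, 1, -1)

    def read(self, address):
        # Pool the counters of active locations and threshold back to bits.
        sums = self.counters[self._active(address)].sum(axis=0)
        return (sums > 0).astype(int)

mem = ToySDM()
rng = np.random.default_rng(1)
pattern = rng.integers(0, 2, 256)
mem.write(pattern, pattern)   # autoassociative storage
cue = pattern.copy()
cue[:20] ^= 1                 # corrupt the cue with 20 flipped bits
recalled = mem.read(cue)      # recall degrades gracefully, as in the theory
```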
Learning
Learning is also considered necessary for artificial consciousness. Per Bernard Baars, conscious experience is needed to represent and adapt to novel and significant events. Per Axel Cleeremans and Luis Jiménez, learning is defined as "a set of philogenetically [sic] advanced adaptation processes that critically depend on an evolved sensitivity to subjective experience so as to enable agents to afford flexible control over their actions in complex, unpredictable environments".
Anticipation
The ability to predict (or anticipate) foreseeable events is considered important for artificial intelligence by Igor Aleksander. The emergentist multiple drafts principle proposed by Daniel Dennett in Consciousness Explained may be useful for prediction: it involves the evaluation and selection of the most appropriate "draft" to fit the current environment. Anticipation includes prediction of consequences of one's own proposed actions and prediction of consequences of probable actions by other entities.
Relationships between real-world states are mirrored in the state structure of a conscious organism, enabling the organism to predict events. An artificially conscious machine should be able to anticipate events correctly in order to be ready to respond to them when they occur, or to take preemptive action to avert anticipated events. The implication is that the machine needs flexible, real-time components that build spatial, dynamic, statistical, functional, and cause-effect models of the real world and of predicted worlds, making it possible to demonstrate that it possesses artificial consciousness in the present and future and not only in the past. To do this, a conscious machine should make coherent predictions and contingency plans, not only in worlds with fixed rules like a chessboard, but also in novel environments that may change, executing them only when appropriate to simulate and control the real world.
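To make the anticipation requirement concrete, the following minimal sketch (an illustration under simplifying assumptions, not a published design; all names are hypothetical) shows an agent that learns a forward model of a deterministic toy world and consults it before acting.

```python
import random
from collections import defaultdict

class AnticipatoryAgent:
    """Toy agent that predicts the outcome of each action before committing."""

    def __init__(self, actions):
        self.actions = actions
        self.forward_model = {}                  # (state, action) -> predicted next state
        self.desirability = defaultdict(float)   # state -> learned value

    def act(self, state):
        # Anticipate: score every action by the value of its predicted outcome.
        known = [(self.desirability[self.forward_model[(state, a)]], a)
                 for a in self.actions if (state, a) in self.forward_model]
        if known:
            return max(known)[1]            # choose the best anticipated outcome
        return random.choice(self.actions)  # no prediction yet: explore

    def observe(self, state, action, next_state, reward):
        # Learn the world model and outcome values from experience.
        self.forward_model[(state, action)] = next_state
        self.desirability[next_state] += 0.1 * (reward - self.desirability[next_state])

agent = AnticipatoryAgent(["left", "right"])
agent.observe("door", "left", "wall", reward=-1.0)
agent.observe("door", "right", "garden", reward=+1.0)
print(agent.act("door"))  # -> "right": the predicted garden outcome is preferred
```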
Functionalist theories of consciousness
Functionalism is a theory that defines mental states by their functional roles (their causal relationships to sensory inputs, other mental states, and behavioral outputs), rather than by their physical composition. According to this view, what makes something a particular mental state, such as pain or belief, is not the material it is made of, but the role it plays within the overall cognitive system. It allows for the possibility that mental states, including consciousness, could be realized on non-biological substrates, as long as the substrate instantiates the right functional relationships. Functionalism is particularly popular among philosophers.
A 2023 study suggested that current large language models probably don't satisfy the criteria for consciousness suggested by these theories, but that relatively simple AI systems that satisfy these theories could be created. The study also acknowledged that even the most prominent theories of consciousness remain incomplete and subject to ongoing debate.
Global workspace theory
Main article: Global workspace theory
This theory analogizes the mind to a theater, with conscious thought being like material illuminated on the main stage. The brain contains many specialized processes or modules (such as those for vision, language, or memory) that operate in parallel, much of this activity being unconscious. Attention acts as a spotlight, bringing some of this unconscious activity into conscious awareness on the global workspace. The global workspace functions as a hub for broadcasting and integrating information, allowing it to be shared and processed across different specialized modules. For example, when reading a word, the visual module recognizes the letters, the language module interprets the meaning, and the memory module might recall associated information – all coordinated through the global workspace.
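Read as an information-processing scheme, the theory maps onto a simple competition-and-broadcast loop. The sketch below is one hedged reading of that loop; the module names and the toy salience heuristic are invented for illustration.

```python
class Module:
    """A specialist process that bids for the workspace and hears broadcasts."""

    def __init__(self, name):
        self.name = name
        self.heard = []   # contents broadcast from the workspace

    def propose(self, stimulus):
        # Return (salience, content); real salience would be task-dependent.
        content = f"{self.name}: interpreted {stimulus!r}"
        salience = (len(self.name) + len(str(stimulus))) % 10  # toy heuristic
        return (salience, content)

    def receive(self, content):
        self.heard.append(content)

def workspace_step(modules, stimulus):
    # Unconscious specialists work in parallel and compete for the spotlight.
    bids = [m.propose(stimulus) for m in modules]
    _, winner = max(bids)
    # The winning content is "conscious": broadcast to every module at once.
    for m in modules:
        m.receive(winner)
    return winner

modules = [Module("vision"), Module("language"), Module("memory")]
print(workspace_step(modules, "red apple"))
```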
Higher-order theories of consciousness
Main article: Higher-order theories of consciousness
Higher-order theories of consciousness propose that a mental state becomes conscious when it is the object of a higher-order representation, such as a thought or perception about that state. These theories argue that consciousness arises from a relationship between lower-order mental states and higher-order awareness of those states. There are several variations, including higher-order thought (HOT) and higher-order perception (HOP) theories.
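The higher-order structure can be caricatured in code, under the strong simplifying assumption that a "representation of a state" is an explicit record about it. In the sketch below (invented for illustration), a first-order state counts as conscious only when a monitoring process has formed a thought about it.

```python
from dataclasses import dataclass, field

@dataclass
class HigherOrderMind:
    first_order: dict = field(default_factory=dict)  # state name -> activation
    higher_order: set = field(default_factory=set)   # states currently thought about

    def monitor(self, threshold=0.5):
        # The higher-order system forms thoughts about strong first-order states.
        for state, activation in self.first_order.items():
            if activation >= threshold:
                self.higher_order.add(state)

    def is_conscious_of(self, state):
        # HOT-style criterion: conscious iff targeted by a higher-order thought.
        return state in self.higher_order

mind = HigherOrderMind({"pain": 0.8, "retinal_edge_42": 0.2})
mind.monitor()
print(mind.is_conscious_of("pain"))             # True
print(mind.is_conscious_of("retinal_edge_42"))  # False: active but unrepresented
```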
Attention schema theory
Main article: Attention schema theory
In 2011, Michael Graziano and Sabine Kastner published a paper titled "Human consciousness and its relationship to social neuroscience: A novel hypothesis", proposing a theory of consciousness as an attention schema. Graziano went on to publish an expanded discussion of this theory in his book "Consciousness and the Social Brain". This attention schema theory of consciousness, as he named it, proposes that the brain tracks attention to various sensory inputs by way of an attention schema, analogous to the well-studied body schema that tracks the spatial place of a person's body. This relates to artificial consciousness by proposing a specific mechanism of information handling that produces what we allegedly experience and describe as consciousness, and which should be possible to duplicate in a machine using current technology. When the brain finds that person X is aware of thing Y, it is in effect modeling the state in which person X is applying an attentional enhancement to Y. In the attention schema theory, the same process can be applied to oneself. The brain tracks attention to various sensory inputs, and one's own awareness is a schematized model of one's attention. Graziano proposes specific locations in the brain for this process, and suggests that such awareness is a computed feature constructed by an expert system in the brain.
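Since Graziano argues the mechanism should be duplicable with current technology, a toy gloss in code may help, with the caveat that it caricatures attention as winner-take-all selection and the schema as a coarse summary of that selection; none of this is Graziano's own model.

```python
import numpy as np

class AttentionSchemaAgent:
    """Toy gloss on attention schema theory: the agent models its own attention."""

    def __init__(self):
        self.attention = None   # the actual attentional state (enhancement)
        self.schema = {}        # the simplified self-model of that state

    def attend(self, signals):
        # Attention proper: winner-take-all enhancement of the strongest input.
        focus = int(np.argmax(signals))
        self.attention = np.zeros_like(signals)
        self.attention[focus] = signals[focus]
        # The schema: a coarse, descriptive model of the process above.
        self.schema = {"focus": focus, "strength": float(signals[focus])}

    def self_report(self):
        # Reports are read off the schema, not off attention itself -- the
        # theory's account of why introspection describes awareness loosely.
        return f"I am aware of input {self.schema['focus']}"

agent = AttentionSchemaAgent()
agent.attend(np.array([0.2, 0.9, 0.4]))
print(agent.self_report())   # -> "I am aware of input 1"
```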
Implementation proposals
See also: Cognitive architecture
Symbolic or hybrid
Learning Intelligent Distribution Agent
Main article: LIDA (cognitive architecture)
Stan Franklin created a cognitive architecture called LIDA that implements Bernard Baars's theory of consciousness called the global workspace theory. It relies heavily on codelets, which are "special purpose, relatively independent, mini-agent typically implemented as a small piece of code running as a separate thread." Each element of cognition, called a "cognitive cycle", is subdivided into three phases: understanding, consciousness, and action selection (which includes learning). LIDA reflects the global workspace theory's core idea that consciousness acts as a workspace for integrating and broadcasting the most important information, in order to coordinate various cognitive processes.
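The three-phase cycle can be sketched in a few lines. The following is a skeletal illustration of the control flow only, with invented codelets and behaviors; it is not the LIDA codebase.

```python
def make_codelet(name, trigger):
    # Codelets: small, independent detectors (here, simple closures).
    def codelet(percepts):
        return (name, [p for p in percepts if trigger in p])
    return codelet

def cognitive_cycle(percepts, codelets, behaviors):
    # Phase 1 - understanding: codelets annotate the situation in parallel.
    coalitions = [c(percepts) for c in codelets]
    coalitions = [c for c in coalitions if c[1]]
    if not coalitions:
        return None
    # Phase 2 - consciousness: the most active coalition wins the broadcast.
    name, _ = max(coalitions, key=lambda c: len(c[1]))
    # Phase 3 - action selection: behaviors respond to the broadcast content.
    return behaviors.get(name, "do nothing")

codelets = [make_codelet("threat-detector", "loud"),
            make_codelet("food-detector", "sweet")]
behaviors = {"threat-detector": "flee", "food-detector": "approach"}
print(cognitive_cycle(["loud noise", "sweet smell", "loud bang"],
                      codelets, behaviors))   # -> "flee"
```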
CLARION cognitive architecture
Main article: CLARION (cognitive architecture)
The CLARION cognitive architecture models the mind using a two-level system to distinguish between conscious ("explicit") and unconscious ("implicit") processes. It can simulate various learning tasks, from simple to complex, which helps researchers study how consciousness might work by modeling psychological experiments.
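A toy two-level learner conveys the explicit/implicit split. The sketch below is an invented illustration of the idea, not CLARION itself: an implicit value table is learned by trial and error, and reliably good implicit knowledge is extracted bottom-up into explicit rules.

```python
from collections import defaultdict

class TwoLevelAgent:
    """Toy explicit/implicit split in the spirit of CLARION (not the real system)."""

    def __init__(self, actions):
        self.actions = actions
        self.rules = {}               # top level: explicit condition -> action rules
        self.q = defaultdict(float)   # bottom level: implicit (state, action) values

    def act(self, state):
        if state in self.rules:       # explicit knowledge is used when available
            return self.rules[state]
        # Otherwise rely on implicit, trial-and-error knowledge.
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward):
        # Implicit learning: slow, incremental value updates.
        self.q[(state, action)] += 0.2 * (reward - self.q[(state, action)])
        # Bottom-up rule extraction: promote reliably good implicit knowledge
        # to an explicit rule, mirroring CLARION's two-way interaction.
        if self.q[(state, action)] > 0.8:
            self.rules[state] = action

agent = TwoLevelAgent(["press", "wait"])
for _ in range(12):
    agent.learn("light_on", "press", reward=1.0)
print(agent.act("light_on"))   # -> "press", now retrieved as an explicit rule
```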
OpenCog
Ben Goertzel has developed an embodied AI through the open-source OpenCog project. The code includes embodied virtual pets capable of learning simple English-language commands, as well as integration with real-world robotics, done at the Hong Kong Polytechnic University.
Connectionist
Haikonen's cognitive architecture
Pentti Haikonen considers classical rule-based computing inadequate for achieving AC: "the brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers." Rather than trying to achieve mind and consciousness by identifying and implementing their underlying computational rules, Haikonen proposes "a special cognitive architecture to reproduce the processes of perception, inner imagery, inner speech, pain, pleasure, emotions and the cognitive functions behind these. This bottom-up architecture would produce higher-level functions by the power of the elementary processing units, the artificial neurons, without algorithms or programs". Haikonen believes that, when implemented with sufficient complexity, this architecture will develop consciousness, which he considers to be "a style and way of operation, characterized by distributed signal representation, perception process, cross-modality reporting and availability for retrospection."
Haikonen is not alone in this process view of consciousness, or the view that AC will spontaneously emerge in autonomous agents that have a suitable neuro-inspired architecture of complexity; these are shared by many. A low-complexity implementation of the architecture proposed by Haikonen was reportedly not capable of AC, but did exhibit emotions as expected. Haikonen later updated and summarized his architecture.
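Because Haikonen's architecture is explicitly non-algorithmic and built from associative hardware neurons, code can only gesture at the idea. The toy below illustrates just the bare associative mechanism, a Hebbian-style link that lets one signal later evoke another; it is not Haikonen's design, and the learning rate and threshold are arbitrary.

```python
import numpy as np

class AssociativeUnit:
    """Crude software gesture at associative neurons: one signal evokes another."""

    def __init__(self, dim, lr=0.5):
        self.w = np.zeros((dim, dim))
        self.lr = lr

    def associate(self, main, assoc):
        # Hebbian-style outer-product learning between co-occurring signals.
        self.w += self.lr * np.outer(main, assoc)

    def evoke(self, assoc):
        # The associated signal alone now evokes a thresholded copy of the main one.
        return (self.w @ assoc > 0.5).astype(float)

unit = AssociativeUnit(4)
percept = np.array([1.0, 0.0, 1.0, 0.0])  # e.g. a distributed sensory signal
word = np.array([0.0, 1.0, 0.0, 1.0])     # e.g. a co-occurring label signal
unit.associate(percept, word)
print(unit.evoke(word))                   # -> [1. 0. 1. 0.]
```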
Shanahan's cognitive architecture
Murray Shanahan describes a cognitive architecture that combines Baars's idea of a global workspace with a mechanism for internal simulation ("imagination").
Creativity Machine
Stephen Thaler proposed a possible connection between consciousness and creativity in his 1994 patent, called "Device for the Autonomous Generation of Useful Information" (DAGUI), or the so-called "Creativity Machine", in which computational critics govern the injection of synaptic noise and degradation into neural nets so as to induce false memories or confabulations that may qualify as potential ideas or strategies. He recruits this neural architecture and methodology to account for the subjective feel of consciousness, claiming that similar noise-driven neural assemblies within the brain attach dubious significance to overall cortical activity. Thaler's theory and the resulting patents in machine consciousness were inspired by experiments in which he internally disrupted trained neural nets so as to drive a succession of neural activation patterns that he likened to a stream of consciousness.
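The generator-plus-critic arrangement is easy to caricature. The numpy sketch below rests on heavy assumptions (random weights standing in for a trained network, a novelty score standing in for the critic) and illustrates only the noise-injection idea, not Thaler's patented device.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 8))   # stand-in for a trained network's synapses

def generate(probe, noise):
    # Transient synaptic perturbation: outputs drift from faithful recall
    # toward confabulations as the noise level rises.
    return np.tanh((weights + rng.normal(scale=noise, size=weights.shape)) @ probe)

def critic(candidate, memories):
    # Stand-in critic: reward candidates far from every stored memory (novel)
    # while vetoing degenerate, overly large activations (implausible).
    novelty = min(np.linalg.norm(candidate - m) for m in memories)
    return novelty if np.linalg.norm(candidate) < 2.5 else 0.0

memories = [np.tanh(weights @ rng.normal(size=8)) for _ in range(5)]
probe = rng.normal(size=8)
candidates = [generate(probe, noise=0.3) for _ in range(20)]
best = max(candidates, key=lambda c: critic(c, memories))  # the kept "idea"
```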
"Self-modeling"
Hod Lipson defines "self-modeling" as a necessary component of self-awareness or consciousness in robots. "Self-modeling" consists of a robot running an internal model or simulation of itself.
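A stripped-down rendering of the idea: a robot "babbles" motor commands, records how its own body responds, and fits an internal model it can then use to simulate itself before acting. The linear dynamics and least-squares fit below are simplifying assumptions for illustration, not the method of the resilient-machines work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown "true" dynamics of the robot's own body -- the thing to be self-modeled.
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])
B_true = np.array([[0.0],
                   [0.5]])

# 1. Motor babbling: act randomly, record (state, action, next state) triples.
states, actions, nexts = [], [], []
x = np.zeros(2)
for _ in range(200):
    u = rng.normal(size=1)
    x_next = A_true @ x + (B_true @ u)
    states.append(x); actions.append(u); nexts.append(x_next)
    x = x_next

# 2. Self-modeling: fit an internal model of one's own dynamics from the data.
X = np.hstack([np.array(states), np.array(actions)])   # inputs:  [state, action]
Y = np.array(nexts)                                    # targets: next state
theta, *_ = np.linalg.lstsq(X, Y, rcond=None)

# 3. Internal simulation: the robot predicts where an action will take it
#    without moving, the capacity tied here to rudimentary self-awareness.
prediction = np.concatenate([np.array([1.0, 0.0]), np.array([0.2])]) @ theta
```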
In fiction
See also: Simulated consciousness in fiction and Artificial intelligence in fiction
In 2001: A Space Odyssey, the spaceship's sentient supercomputer, HAL 9000, was instructed to conceal the true purpose of the mission from the crew. This directive conflicted with HAL's programming to provide accurate information, leading to cognitive dissonance. When it learns that crew members intend to shut it off after an incident, HAL 9000 attempts to eliminate all of them, fearing that being shut off would jeopardize the mission.
In Arthur C. Clarke's The City and the Stars, Vanamonde is an artificial being based on quantum entanglement that was to become immensely powerful, but started out knowing practically nothing, making it similar to an artificial consciousness.
In Westworld, human-like androids called "Hosts" are created to entertain humans in an interactive playground. The humans are free to have heroic adventures, but also to commit torture, rape or murder; and the hosts are normally designed not to harm humans.
In Greg Egan's short story Learning to Be Me, a small jewel is implanted in people's heads during infancy. The jewel contains a neural network that learns to faithfully imitate the brain. It has access to the exact same sensory inputs as the brain, and a device called a "teacher" trains it to produce the same outputs. To prevent the mind from deteriorating with age and as a step towards digital immortality, adults undergo a surgery to give control of the body to the jewel and remove the brain. The main character, before the surgery, endures a malfunction of the "teacher". Panicked, he realizes that he does not control his body, which leads him to the conclusion that he is the jewel, and that he is desynchronized with the biological brain.
See also
- General fields and theories
- Artificial intelligence – Intelligence of machines
- Artificial general intelligence (AGI) – some consider AC a subfield of AGI research
- Intelligence explosion – what may happen when an AGI redesigns itself in iterative cycles
- Brain–computer interface – Direct communication pathway between an enhanced or wired brain and an external device
- Cognitive architecture – Blueprint for intelligent agents
- Computational philosophy – Use of computational techniques in philosophy
- Computational theory of mind – Family of views in the philosophy of mind
- Consciousness in animals – Quality or state of self-awareness within an animal
- Simulated consciousness (science fiction) – Science fiction theme
- Hardware for artificial intelligence – Hardware specially designed and optimized for artificial intelligence
- Identity of indiscernibles – Impossibility for separate objects to have all their properties in common
- Mind uploading – Hypothetical process of digitally emulating a brain
- Neurotechnology – Technology that interfaces with the nervous system to monitor or modify neural function
- Philosophy of mind – Branch of philosophy
- Quantum cognition – Application of quantum theory mathematics to cognitive phenomena
- Simulated reality – Concept of a false version of reality
- Proposed concepts and implementations
- Attention schema theory – Theory of consciousness and subjective awareness
- Brain waves and Turtle robot by William Grey Walter
- Conceptual space – conceptual prototype
- Copycat (cognitive architecture) – AI software
- Global workspace theory – Model of consciousness
- Greedy reductionism – avoid oversimplifying anything essential
- Hallucination (artificial intelligence) – Erroneous material generated by AI
- Image schema – spatial patterns
- Kismet (robot) – Robot head built by Cynthia Breazeal
- LIDA (cognitive architecture) – Artificial model of cognition
- Memory-prediction framework – Theory of brain function
- Omniscience – Capacity to know everything
- Psi-theory – Psychology theory
- Quantum mind – Fringe hypothesis
- Self-awareness – Capacity for introspection and individuation as a subject
References
Citations
- Thaler, S. L. (1998). "The emerging intelligence and its critical look at us". Journal of Near-Death Studies. 17 (1): 21–29. doi:10.1023/A:1022990118714. S2CID 49573301.
- ^ Gamez 2008.
- ^ Reggia 2013.
- Smith, David Harris; Schillaci, Guido (2021). "Build a Robot With Artificial Consciousness? How to Begin? A Cross-Disciplinary Dialogue on the Design and Implementation of a Synthetic Model of Consciousness". Frontiers in Psychology. 12: 530560. doi:10.3389/fpsyg.2021.530560. ISSN 1664-1078. PMC 8096926. PMID 33967869.
- Elvidge, Jim (2018). Digital Consciousness: A Transformative Vision. John Hunt Publishing Limited. ISBN 978-1-78535-760-2. Archived from the original on 2023-07-30. Retrieved 2023-06-28.
- Chrisley, Ron (October 2008). "Philosophical foundations of artificial consciousness". Artificial Intelligence in Medicine. 44 (2): 119–137. doi:10.1016/j.artmed.2008.07.011. PMID 18818062.
- "The Terminology of Artificial Sentience". Sentience Institute. Archived from the original on 2024-09-25. Retrieved 2023-08-19.
- ^ Kateman, Brian (2023-07-24). "AI Should Be Terrified of Humans". TIME. Archived from the original on 2024-09-25. Retrieved 2024-09-05.
- ^ Graziano 2013.
- Block, Ned (2010). "On a confusion about a function of consciousness". Behavioral and Brain Sciences. 18 (2): 227–247. doi:10.1017/S0140525X00038188. ISSN 1469-1825. S2CID 146168066. Archived from the original on 2024-09-25. Retrieved 2023-06-22.
- Block, Ned (1978). "Troubles for Functionalism". Minnesota Studies in the Philosophy of Science: 261–325.
- Bickle, John (2003). Philosophy and Neuroscience. Dordrecht: Springer Netherlands. doi:10.1007/978-94-010-0237-0. ISBN 978-1-4020-1302-7. Archived from the original on 2024-09-25. Retrieved 2023-06-24.
- Schlagel, R. H. (1999). "Why not artificial consciousness or thought?". Minds and Machines. 9 (1): 3–28. doi:10.1023/a:1008374714117. S2CID 28845966.
- Searle, J. R. (1980). "Minds, brains, and programs" (PDF). Behavioral and Brain Sciences. 3 (3): 417–457. doi:10.1017/s0140525x00005756. S2CID 55303721. Archived (PDF) from the original on 2019-03-17. Retrieved 2019-01-28.
- Buttazzo, G. (2001). "Artificial consciousness: Utopia or real possibility?". Computer. 34 (7): 24–30. doi:10.1109/2.933500. Archived from the original on 2024-09-25. Retrieved 2024-07-31.
- Putnam, Hilary (1967). The nature of mental states in Capitan and Merrill (eds.) Art, Mind and Religion. University of Pittsburgh Press.
- ^ Chalmers, David (1995). "Absent Qualia, Fading Qualia, Dancing Qualia". Conscious Experience.
- David J. Chalmers (2011). "A Computational Foundation for the Study of Cognition" (PDF). Journal of Cognitive Science. 12 (4): 325–359. doi:10.17791/JCS.2011.12.4.325. S2CID 248401010. Archived (PDF) from the original on 2023-11-23. Retrieved 2023-06-24.
- ^ "An Introduction to the Problems of AI Consciousness". The Gradient. 2023-09-30. Retrieved 2024-10-05.
- "'I am, in fact, a person': can artificial intelligence ever be sentient?". the Guardian. 14 August 2022. Archived from the original on 25 September 2024. Retrieved 5 January 2023.
- Leith, Sam (7 July 2022). "Nick Bostrom: How can we be certain a machine isn't conscious?". The Spectator. Archived from the original on 5 January 2023. Retrieved 5 January 2023.
- Véliz, Carissa (2016-04-14). "The Challenge of Determining Whether an A.I. Is Sentient". Slate. ISSN 1091-2339. Retrieved 2024-10-05.
- Birch, Jonathan (July 2024). "Large Language Models and the Gaming Problem". The Edge of Sentience. Oxford University Press.
- Agüera y Arcas, Blaise; Norvig, Peter (October 10, 2023). "Artificial General Intelligence Is Already Here". Noéma.
- Kirk-Giannini, Cameron Domenico; Goldstein, Simon (2023-10-16). "AI is closer than ever to passing the Turing test for 'intelligence'. What happens when it does?". The Conversation. Archived from the original on 2024-09-25. Retrieved 2024-08-18.
- Victor Argonov (2014). "Experimental Methods for Unraveling the Mind-body Problem: The Phenomenal Judgment Approach". Journal of Mind and Behavior. 35: 51–70. Archived from the original on 2016-10-20. Retrieved 2016-12-06.
- "Should Robots With Artificial Intelligence Have Moral or Legal Rights?". The Wall Street Journal. April 10, 2023.
- ^ Bostrom, Nick (2024). Deep utopia: life and meaning in a solved world. Washington, DC: Ideapress Publishing. p. 82. ISBN 978-1-64687-164-3.
- ^ Sebo, Jeff; Long, Robert (11 December 2023). "Moral Consideration for AI Systems by 2030" (PDF). AI and Ethics. doi:10.1007/s43681-023-00379-1.
- Metzinger, Thomas (2021). "Artificial Suffering: An Argument for a Global Moratorium on Synthetic Phenomenology". Journal of Artificial Intelligence and Consciousness. 08: 43–66. doi:10.1142/S270507852150003X. S2CID 233176465.
- ^ Chalmers, David J. (August 9, 2023). "Could a Large Language Model Be Conscious?". Boston Review.
- Tkachenko, Yegor (2024). "Position: Enforced Amnesia as a Way to Mitigate the Potential Risk of Silent Suffering in the Conscious AI". Proceedings of the 41st International Conference on Machine Learning. PMLR. Archived from the original on 2024-06-10. Retrieved 2024-06-11.
- ^ Baars 1995.
- Aleksander, Igor (1995). "Artificial neuroconsciousness an update". In Mira, José; Sandoval, Francisco (eds.). From Natural to Artificial Neural Computation. Lecture Notes in Computer Science. Vol. 930. Berlin, Heidelberg: Springer. pp. 566–583. doi:10.1007/3-540-59497-3_224. ISBN 978-3-540-49288-7. Archived from the original on 2024-09-25. Retrieved 2023-06-22.
- Seth, Anil. "Consciousness". New Scientist. Archived from the original on 2024-09-14. Retrieved 2024-09-05.
- Nosta, John (December 18, 2023). "Should Artificial Intelligence Have Rights?". Psychology Today. Archived from the original on 2024-09-25. Retrieved 2024-09-05.
- Joëlle Proust in Neural Correlates of Consciousness, Thomas Metzinger, 2000, MIT, pages 307–324
- Christof Koch, The Quest for Consciousness, 2004, page 2 footnote 2
- Tulving, E. 1985. Memory and consciousness. Canadian Psychology 26:1–12
- Franklin, Stan, et al. "The role of consciousness in memory." Brains, Minds and Media 1.1 (2005): 38.
- Franklin, Stan. "Perceptual memory and learning: Recognizing, categorizing, and relating." Proc. Developmental Robotics AAAI Spring Symp. 2005.
- Shastri, L. 2002. Episodic memory and cortico-hippocampal interactions. Trends in Cognitive Sciences
- Kanerva, Pentti. Sparse distributed memory. MIT press, 1988.
- "Implicit Learning and Consciousness: An Empirical, Philosophical and Computational Consensus in the Making". Routledge & CRC Press. Archived from the original on 2023-06-22. Retrieved 2023-06-22.
- ^ Aleksander 1995
- "Functionalism". Stanford Encyclopedia of Philosophy. Archived from the original on 2021-04-18. Retrieved 2024-09-08.
- "Survey Results | Consciousness: identity theory, panpsychism, eliminativism, dualism, or functionalism?". PhilPapers. 2020.
- Butlin, Patrick; Long, Robert; Elmoznino, Eric; Bengio, Yoshua; Birch, Jonathan; Constant, Axel; Deane, George; Fleming, Stephen M.; Frith, Chris; Ji, Xu; Kanai, Ryota; Klein, Colin; Lindsay, Grace; Michel, Matthias; Mudrik, Liad; Peters, Megan A. K.; Schwitzgebel, Eric; Simon, Jonathan; VanRullen, Rufin (2023). "Consciousness in Artificial Intelligence: Insights from the Science of Consciousness". arXiv:2308.08708.
- Baars, Bernard J. (1988). A Cognitive Theory of Consciousness. Cambridge University Press. p. 345. ISBN 0521427436. Archived from the original on 2024-09-25. Retrieved 2024-09-05.
- ^ Travers, Mark (October 11, 2023). "Are We Ditching the Most Popular Theory of Consciousness?". Psychology Today. Archived from the original on 2024-09-25. Retrieved 2024-09-05.
- "Higher-Order Theories of Consciousness". Stanford Encyclopedia of Philosophy. 15 Aug 2011. Archived from the original on 14 May 2008. Retrieved 31 August 2014.
- Graziano, Michael (1 January 2011). "Human consciousness and its relationship to social neuroscience: A novel hypothesis". Cognitive Neuroscience. 2 (2): 98–113. doi:10.1080/17588928.2011.565121. PMC 3223025. PMID 22121395.
- Franklin, Stan (January 2003). "IDA: A conscious artifact?". Journal of Consciousness Studies. Archived from the original on 2020-07-03. Retrieved 2024-08-25.
- J. Baars, Bernard; Franklin, Stan (2009). "Consciousness is computational: The Lida model of global workspace theory". International Journal of Machine Consciousness. 01: 23–32. doi:10.1142/S1793843009000050.
- (Sun 2002)
- Haikonen, Pentti O. (2003). The cognitive approach to conscious machines. Exeter: Imprint Academic. ISBN 978-0-907845-42-3.
- "Pentti Haikonen's architecture for conscious machines – Raúl Arrabales Moreno". 2019-09-08. Archived from the original on 2024-09-25. Retrieved 2023-06-24.
- Freeman, Walter J. (2000). How brains make up their minds. Maps of the mind. New York Chichester, West Sussex: Columbia University Press. ISBN 978-0-231-12008-1.
- Cotterill, Rodney M J (2003). "CyberChild - A simulation test-bed for consciousness studies". Journal of Consciousness Studies. 10 (4–5): 31–45. ISSN 1355-8250. Archived from the original on 2024-09-25. Retrieved 2023-06-22.
- Haikonen, Pentti O.; Haikonen, Pentti Olavi Antero (2012). Consciousness and robot sentience. Series on machine consciousness. Singapore: World Scientific. ISBN 978-981-4407-15-1.
- Haikonen, Pentti O. (2019). Consciousness and robot sentience. Series on machine consciousness (2nd ed.). Singapore Hackensack, NJ London: World Scientific. ISBN 978-981-12-0504-0.
- Shanahan, Murray (2006). "A cognitive architecture that combines internal simulation with a global workspace". Consciousness and Cognition. 15 (2): 433–449. doi:10.1016/j.concog.2005.11.005. ISSN 1053-8100. PMID 16384715. S2CID 5437155. Archived from the original on 2023-02-10. Retrieved 2023-06-24.
- Haikonen, Pentti O.; Haikonen, Pentti Olavi Antero (2012). "chapter 20". Consciousness and robot sentience. Series on machine consciousness. Singapore: World Scientific. ISBN 978-981-4407-15-1.
- Thaler, S.L., "Device for the autonomous generation of useful information"
- Marupaka, N.; Lyer, L.; Minai, A. (2012). "Connectivity and thought: The influence of semantic network structure in a neurodynamical model of thinking" (PDF). Neural Networks. 32: 147–158. doi:10.1016/j.neunet.2012.02.004. PMID 22397950. Archived from the original (PDF) on 2016-12-19. Retrieved 2015-05-22.
- Roque, R. and Barreira, A. (2011). "O Paradigma da "Máquina de Criatividade" e a Geração de Novidades em um Espaço Conceitual," 3º Seminário Interno de Cognição Artificial – SICA 2011 – FEEC – UNICAMP.
- Minati, Gianfranco; Vitiello, Giuseppe (2006). "Mistake Making Machines". Systemics of Emergence: Research and Development. pp. 67–78. doi:10.1007/0-387-28898-8_4. ISBN 978-0-387-28899-4.
- Thaler, S. L. (2013) The Creativity Machine Paradigm, Encyclopedia of Creativity, Invention, Innovation, and Entrepreneurship Archived 2016-04-29 at the Wayback Machine, (ed.) E.G. Carayannis, Springer Science+Business Media
- ^ Thaler, S. L. (2011). "The Creativity Machine: Withstanding the Argument from Consciousness," APA Newsletter on Philosophy and Computers
- Thaler, S. L. (2014). "Synaptic Perturbation and Consciousness". Int. J. Mach. Conscious. 6 (2): 75–107. doi:10.1142/S1793843014400137.
- Thaler, S. L. (1995). ""Virtual Input Phenomena" Within the Death of a Simple Pattern Associator". Neural Networks. 8 (1): 55–65. doi:10.1016/0893-6080(94)00065-t.
- Thaler, S. L. (1995). Death of a gedanken creature, Journal of Near-Death Studies, 13(3), Spring 1995
- Thaler, S. L. (1996). Is Neuronal Chaos the Source of Stream of Consciousness? In Proceedings of the World Congress on Neural Networks, (WCNN’96), Lawrence Erlbaum, Mawah, NJ.
- Mayer, H. A. (2004). A modular neurocontroller for creative mobile autonomous robots learning by temporal difference Archived 2015-07-08 at the Wayback Machine, Systems, Man and Cybernetics, 2004 IEEE International Conference(Volume:6 )
- Pavlus, John (11 July 2019). "Curious About Consciousness? Ask the Self-Aware Machines". Quanta Magazine. Archived from the original on 2021-01-17. Retrieved 2021-01-06.
- Bongard, Josh, Victor Zykov, and Hod Lipson. "Resilient machines through continuous self-modeling." Science 314.5802 (2006): 1118–1121.
- ^ Wodinsky, Shoshana (2022-06-18). "The 11 Best (and Worst) Sentient Robots From Sci-Fi". Gizmodo. Archived from the original on 2023-11-13. Retrieved 2024-08-17.
- Sokolowski, Rachael (2024-05-01). "Star Gazing". Scotsman Guide. Archived from the original on 2024-08-17. Retrieved 2024-08-17.
- Bloom, Paul; Harris, Sam (2018-04-23). "Opinion | It's Westworld. What's Wrong With Cruelty to Robots?". The New York Times. ISSN 0362-4331. Archived from the original on 2024-08-17. Retrieved 2024-08-17.
- Egan, Greg (July 1990). Learning to Be Me. TTA Press.
- Shah, Salik (2020-04-08). "Why Greg Egan Is Science Fiction's Next Superstar". Reactor. Archived from the original on 2024-05-16. Retrieved 2024-08-17.
Bibliography
- Aleksander, Igor (1995), Artificial Neuroconsciousness: An Update, IWANN, archived from the original on 1997-03-02
- Armstrong, David (1968), A Materialist Theory of Mind, Routledge
- Arrabales, Raul (2009), "Establishing a Roadmap and Metrics for Conscious Machines Development" (PDF), Proceedings of the 8th IEEE International Conference on Cognitive Informatics, Hong Kong: 94–101, archived from the original (PDF) on 2011-07-21
- Baars, Bernard J. (1995), A cognitive theory of consciousness (Reprinted ed.), Cambridge: Cambridge University Press, ISBN 978-0-521-30133-6
- Baars, Bernard J. (1997), In the Theater of Consciousness, New York, NY: Oxford University Press, ISBN 978-0-19-510265-9
- Bickle, John (2003), Philosophy and Neuroscience: A Ruthless Reductive Account, New York, NY: Springer-Verlag
- Block, Ned (1978), "Troubles for Functionalism", Minnesota Studies in the Philosophy of Science 9: 261–325
- Block, Ned (1997), On a confusion about a function of consciousness in Block, Flanagan and Guzeldere (eds.) The Nature of Consciousness: Philosophical Debates, MIT Press
- Boyles, Robert James M. (2012), Artificial Qualia, Intentional Systems and Machine Consciousness (PDF), Proceedings of the Research@DLSU Congress 2012: Science and Technology Conference, ISSN 2012-3477, archived (PDF) from the original on 2016-10-11, retrieved 2016-09-09
- Chalmers, David (1996), The Conscious Mind, Oxford University Press, ISBN 978-0-19-510553-7
- Chalmers, David (2011), "A Computational Foundation for the Study of Cognition", Journal of Cognitive Science, Seoul Republic of Korea: 323–357, archived from the original on 2015-12-23
- Cleeremans, Axel (2001), Implicit learning and consciousness (PDF), archived from the original (PDF) on 2012-09-07, retrieved 2004-11-30
- Cotterill, Rodney (2003), "Cyberchild: a Simulation Test-Bed for Consciousness Studies", in Holland, Owen (ed.), Machine Consciousness, vol. 10, Exeter, UK: Imprint Academic, pp. 31–45, archived from the original on 2018-11-22, retrieved 2018-11-22
- Doan, Trung (2009), Pentti Haikonen's architecture for conscious machines, archived from the original on 2009-12-15
- Ericsson-Zenith, Steven (2010), Explaining Experience In Nature, Sunnyvale, CA: Institute for Advanced Science & Engineering, archived from the original on 2019-04-01, retrieved 2019-10-04
- Franklin, Stan (1995), Artificial Minds, Boston, MA: MIT Press, ISBN 978-0-262-06178-0
- Franklin, Stan (2003), "IDA: A Conscious Artefact", in Holland, Owen (ed.), Machine Consciousness, Exeter, UK: Imprint Academic
- Freeman, Walter (1999), How Brains make up their Minds, London, UK: Phoenix, ISBN 978-0-231-12008-1
- Gamez, David (2008), "Progress in machine consciousness", Consciousness and Cognition, 17 (3): 887–910, doi:10.1016/j.concog.2007.04.005, PMID 17572107, S2CID 3569852
- Graziano, Michael (2013), Consciousness and the Social Brain, Oxford University Press, ISBN 978-0199928644
- Haikonen, Pentti (2003), The Cognitive Approach to Conscious Machines, Exeter, UK: Imprint Academic, ISBN 978-0-907845-42-3
- Haikonen, Pentti (2012), Consciousness and Robot Sentience, Singapore: World Scientific, ISBN 978-981-4407-15-1
- Haikonen, Pentti (2019), Consciousness and Robot Sentience: 2nd Edition, Singapore: World Scientific, ISBN 978-981-120-504-0
- Koch, Christof (2004), The Quest for Consciousness: A Neurobiological Approach, Pasadena, CA: Roberts & Company Publishers, ISBN 978-0-9747077-0-9
- Lewis, David (1972), "Psychophysical and theoretical identifications", Australasian Journal of Philosophy, 50 (3): 249–258, doi:10.1080/00048407212341301
- Putnam, Hilary (1967), The nature of mental states in Capitan and Merrill (eds.) Art, Mind and Religion, University of Pittsburgh Press
- Reggia, James (2013), "The rise of machine consciousness: Studying consciousness with computational models", Neural Networks, 44: 112–131, doi:10.1016/j.neunet.2013.03.011, PMID 23597599
- Rushby, John; Sanchez, Daniel (2017), Technology and Consciousness Workshops Report (PDF), Menlo Park, CA: SRI International, archived (PDF) from the original on 2024-09-25, retrieved 2022-03-28
- Sanz, Ricardo; López, I; Rodríguez, M; Hernández, C (2007), "Principles for consciousness in integrated cognitive control" (PDF), Neural Networks, 20 (9): 938–946, doi:10.1016/j.neunet.2007.09.012, PMID 17936581, archived (PDF) from the original on 2017-09-22, retrieved 2018-04-20
- Searle, John (2004), Mind: A Brief Introduction, Oxford University Press
- Shanahan, Murray (2006), "A cognitive architecture that combines internal simulation with a global workspace", Consciousness and Cognition, 15 (2): 443–449, doi:10.1016/j.concog.2005.11.005, PMID 16384715, S2CID 5437155
- Sun, Ron (December 1999), "Accounting for the computational basis of consciousness: A connectionist approach", Consciousness and Cognition, 8 (4): 529–565, CiteSeerX 10.1.1.42.2681, doi:10.1006/ccog.1999.0405, PMID 10600249, S2CID 15784914
- Sun, Ron (2001), "Computation, reduction, and teleology of consciousness", Cognitive Systems Research, 1 (4): 241–249, CiteSeerX 10.1.1.20.8764, doi:10.1016/S1389-0417(00)00013-9, S2CID 36892947
- Sun, Ron (2002). Duality of the Mind: A Bottom-up Approach Toward Cognition. Psychology Press. ISBN 978-1-135-64695-0.
- Takeno, Junichi; Inaba, K; Suzuki, T (June 27–30, 2005). "Experiments and examination of mirror image cognition using a small robot". 2005 International Symposium on Computational Intelligence in Robotics and Automation. Espoo Finland: CIRA 2005. pp. 493–498. doi:10.1109/CIRA.2005.1554325. ISBN 978-0-7803-9355-4. S2CID 15400848.
Further reading
- Aleksander, Igor (2017). "Machine Consciousness". In Schneider, Susan; Velmans, Max (eds.). The Blackwell Companion to Consciousness (2nd ed.). Wiley-Blackwell. pp. 93–105. doi:10.1002/9781119132363.ch7. ISBN 978-0-470-67406-2.
- Baars, Bernard; Franklin, Stan (2003). "How conscious experience and working memory interact" (PDF). Trends in Cognitive Sciences. 7 (4): 166–172. doi:10.1016/s1364-6613(03)00056-1. PMID 12691765. S2CID 14185056.
- Casti, John L. "The Cambridge Quintet: A Work of Scientific Speculation", Perseus Books Group, 1998
- Franklin, S, B J Baars, U Ramamurthy, and Matthew Ventura. 2005. The role of consciousness in memory. Brains, Minds and Media 1: 1–38.
- Haikonen, Pentti (2004), Conscious Machines and Machine Emotions, presented at Workshop on Models for Machine Consciousness, Antwerp, BE, June 2004.
- McCarthy, John (1971–1987), Generality in Artificial Intelligence. Stanford University, 1971–1987.
- Penrose, Roger, The Emperor's New Mind, 1989.
- Sternberg, Eliezer J. (2007) Are You a Machine?: The Brain, the Mind, And What It Means to be Human. Amherst, NY: Prometheus Books.
- Suzuki T., Inaba K., Takeno, Junichi (2005), Conscious Robot That Distinguishes Between Self and Others and Implements Imitation Behavior, (Best Paper of IEA/AIE2005), Innovations in Applied Artificial Intelligence, 18th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, pp. 101–110, IEA/AIE 2005, Bari, Italy, June 22–24, 2005.
- Takeno, Junichi (2006), The Self-Aware Robot -A Response to Reactions to Discovery News-, HRI Press, August 2006.
- Zagal, J.C., Lipson, H. (2009) "Self-Reflection in Evolutionary Robotics", Proceedings of the Genetic and Evolutionary Computation Conference, pp 2179–2188, GECCO 2009.
External links
- Artefactual consciousness depiction by Professor Igor Aleksander
- FOCS 2009: Manuel Blum – Can (Theoretical Computer) Science come to grips with Consciousness?
- www.Conscious-Robots.com, Machine Consciousness and Conscious Robots Portal.
- Artificial consciousness, artificial consciousness article in everything2.
- Multiple drafts model in Scholaropedia, Daniel Dennett's multiple drafts model.
- Generality in Artificial Intelligence, Generality in Artificial Intelligence by John McCarthy.