
Talk:Artificial consciousness: Difference between revisions

Article snapshot taken from Wikipedia with Creative Commons Attribution-ShareAlike license.
Revision as of 14:38, 1 December 2004 by 80.3.32.9 (edit summary: NEUTRALITY!) → Revision as of 14:40, 1 December 2004 by 80.3.32.9 (no edit summary)


7. It does not mention the suggestion of many authors from Searle to Penrose that artificial consciousness may require physical phenomena that are not part of ].

==Comment==
The link to the ] on the article page is a link to a patent nonsense site. ] 14:09, 2 May 2004 (UTC)

:Yep, don't appreciate it either. Is it really an institute? ] 16:22, 2 May 2004 (UTC)

==Archived Discussion==

*]
*]
*]
*]
*]
*]
*]
*]
*]
*]
*]

=="Supposed Experts" in Artificial Consciousness==

Let us attempt to reach consensus on who are the leading proponents of ''artificial consciousness'' and its relation to ''artificial intelligence''. OK? ] 08:44, 26 Apr 2004 (UTC)

Various suggestions (please add to list):

*] - consciousness is a central interest as is AC
*] - wrote ''Consciousness Explained'' and deals with AC
*] - denies AC though most scientists dismiss his reasoning
*] - subjective experience
*'''Igor Aleksander''' - AC
*'''Owen Holland''' - AC
*'''Rod Goodman''' - AC
*'''Sam S. Adams''' - AC - Joshua Blue project, IBM Research
*] - Nobel prize winner, respected ideas on the theory of mind
*]
*]
----

:But Hofstadter, Dennett, Penrose, and Nagel are philosophers of AI or philosophers of mind. They use the term "AI" or "consciousness", but not "artificial consciousness". I don't really have information on the others. ] 16:05, 26 Apr 2004 (UTC)

::AC can perhaps be seen as a part of AI or as a part of consciousness. Interestingly, Wikipedia's article on Hofstadter says he is interested in consciousness, not intelligence. Readers of his work will know, of course, that he is interested in both. But in his case it may be wrong to say that he is interested in AC only as a part of AI. I believe the same comments apply to Dennett, who wrote a book entitled ''Consciousness Explained'', not a book called ''Intelligence Explained''. Penrose is a celebrated mathematical physicist, but a philosopher of dubious importance. ] 22:16, 26 Apr 2004 (UTC)

::I included Nagel only because his subjective experience concept has importance for AC. Dennett talked about what should be considered AC, but the same thing (AC) is unfortunately often referred to under different names, which may not be proper. The only importance of Penrose is that he denies AC (but not all subfields of AI); Hofstadter writes a general philosophy similar to Dennett's, which also touches on AC. At present Igor Aleksander, Owen Holland and Rod Goodman work on a big project to create a conscious robot http://www.guardian.co.uk/uk_news/story/0,3604,1028776,00.html ] 19:13, 26 Apr 2004 (UTC)

==AC and Strong AI==
:How about moving 'artificial consciousness' over to ] instead? ] 16:19, 26 Apr 2004 (UTC)

I disagree: strong AI is a limited proposition about information processors (digital computers/Turing machines); artificial consciousness is about any type of machine, including those based on a 20,000-gene set of DNA strands.


::There is no '''strong AI''' article; the link refers to the AI article, where strong AI is also mentioned. And there is no strong AI theory; there are also no strong AI projects and no strong AI programs; it is usually only mentioned in comparison to weak AI. As far as I know, so far only AC is something that is theoretically supposed to perform what strong AI is supposed to, but it is not the same as strong AI, and is likely more genuine with respect to consciousness than strong AI. There is no place in the AI article where information about AC can be written; also, why can this approach not be a separate article when the subfields of AI are? If there were enough to say about strong AI, then it could be a separate article as well; the reason why it was not a separate article was probably just that nobody knew what to write about it, since it is not determined at all what strong AI is. ] 19:30, 26 Apr 2004 (UTC)

:::Another tiresome set of assertions wrongly presented as hard fact. There '''is''' a strong AI theory, there '''are''' strong AI projects, the field of AI is vehemently split between believers of the weak and strong AI theories, strong AI is '''not''' only mentioned in contrast to weak AI. No one who has read widely could believe otherwise. ] 22:16, 26 Apr 2004 (UTC)

::::Then name one strong AI theory, or one strong AI project, or even one good strong AI definition. I have read AI forums and followed AI for a long time, and saw that it's not clear to almost anybody what it is. ] 01:10, 27 Apr 2004 (UTC)

:::::Now you deny even a definition! If I provide an example will you admit you are wrong? ] 11:23, 27 Apr 2004 (UTC)

::I think the science is plain: AC can be real/genuine/true consciousness. This is a minority opinion on this talk page but is nevertheless a popular view among computer scientists and philosophers of mind. As such I think AC fits into the '''Strong AI''' section of the ] article rather well but I think a separate article is a better idea. Those here who (in defiance of the ] and ] and the ]) think that AC can only ever be a simulation of real/genuine/true consciousness can not, in my opinion, be happy with including AC into Strong AI because Strong AI is real/genuine/true intelligence - not a simulation of it. ] 22:16, 26 Apr 2004 (UTC)

:::It may be plain for you, but at least it's not that plain for ]; it's also the case that without understanding or considering everything, things look much simpler. ] 01:24, 27 Apr 2004 (UTC)

::Even if AC does turn out to be a branch of AI, I think there is still room for an article to clarify that point - separate from the AI article. If it does turn out that AC can be developed as an end in its own right, as I had always assumed, then even more point in it having a separate article. I have no great learning in this field, and I come here primarily to learn. Can we have summaries of the main proponents' arguments about consciousness (as distinct from intelligence) in the main article, please? My offerings, for what they are worth are as follows:
::1) There isn't, from what I have gleaned, a project to pursue the development solely of a machine implementation of consciousness for its own sake, or even as part of some other purposeful endeavour;

:::I think for many researchers AC is the holy grail. ''Growing up with Lucy'' by Steve Grand is perhaps not an example with much promise but that is surely his goal? ] 09:45, 30 Apr 2004 (UTC)

::::"Non-disciplinary" philosophy of "general purpose building blocks," no software "yet". Like building a house from bricks. ] 15:25, 30 Apr 2004 (UTC)

::2) I do not agree that there should necessarily be a strict dichotomy between weak and strong ''consciousness'' - the distinction certainly doesn't arise with natural forms of consciousness, and therefore a coherent definition of AC ''per se'' is required before we can perhaps make distinctions that were developed for the AI Topic;

:::If by ''strong'' and ''weak'' you mean ''genuine/real/true'' and ''non-genuine/simulated/pretend'' then, of course, the distinction does not arise in natural consciousness. Except when we pretend to be asleep! ] 09:45, 30 Apr 2004 (UTC)

::3) I can imagine implementations of AC in various contexts that might be considered just as art - if they have no obvious function - or as entertainment. Though I agree about Wikipedia not being primary research, it does have the propensity to make connections (links) between subject-matter that doesn't occur anywhere else. An article that bridges the gap between what SF writers imagine in their stories and what is both theoretically and practically possible from an engineering perspective is a useful endeavour and doesn't, I think, conflict with what Wikipedia is about. ] 00:41, 27 Apr 2004 (UTC)

:::Yes! You can buy solar powered (non-flying) decorative butterflies using muscle wires on some robotics sites. I reckon with a bit of tweaking they could have a genuine consciousness in excess of the most advanced thermostat. ] 09:45, 30 Apr 2004 (UTC)

::::As far as I know, no scientist ever argued that a thermostat is conscious. The thermostat is a Chalmers example, which he provided not because he himself thinks that a thermostat is conscious, but as a reductio ad absurdum of Lloyd's argument that connectionist models might shed light on the subjective aspects of consciousness described by Nagel. "On the face of it, this approach is put forward as a way of dealing with Nagel's worries about consciousness, where the central mystery is: why is there something it is like to be us at all? There is a huge prima facie mystery about how any sort of physical system could possess conscious experience. Lloyd holds out the promise that connectionist models might shed light on this question, but at the end of the day the models seem to leave the key explanatory question unanswered. Even if we were to go out on a limb and suppose that these simple systems are conscious, the question of explanation would still remain untouched." http://jamaica.u.arizona.edu/~chalmers/notes/lloyd-comments.html (article by Chalmers). ] 16:33, 1 May 2004 (UTC)

:::::Even this selective quote does not say quite what Tkorrovi seems to think. See below discussion in Thermostat section. ] 12:15, 3 May 2004 (UTC)

The reason why there is no "strong AI project" in the way you think of it is that the real work done in "strong AI" is in '''philosophy'''. You can be an engineer and say, "Since no one is creating strong AI, I will do it!" But how do you start? What makes your project not weak AI? Obviously, we haven't figured out what is necessary or sufficient for consciousness.

Weak AI is important for strong AI because it serves as inductive evidence for consciousness/intelligence. That is, if we can make something that ''seems'' conscious/intelligent, there is a possibility that it is truly conscious. If we can't even make something that exhibits intelligent/conscious ''behaviour'', then that it is ''truly'' conscious is out of the question. Hence, the idea behind the Turing Test is worth mentioning.

Lastly, Dennett, Nagel, etc. are not "experts" of AC/strong AI in the sense that what they say must be true. Philosophy of mind and AI is an ongoing debate, and what they say is only one side of the picture. As is characteristic of philosophy, there are many competing theories.

--] 10:33, 27 Apr 2004 (UTC)
:Whilst AI seems to have been dominated by philosophers and its theories disappearing up their own arse (i.e. not leading to particular progress in the engineering field), research into consciousness is focused in ], in particular using scanners to correlate perceptual and cognitive activity with neural activity. There may one day be (if there isn't already) a mental map based on this research that equates to the human genome project (already completed). My suggestion is that artificial implementations that attempt to simulate the operation of conscious processes, as evinced from such neurological research, are the path towards AC actualisation. This will leave philosophy way behind. I am not concerned with the philosophers' views on what constitutes consciousness or not - they'll never reach a consensus, just as they've never reached consensus even on logic or ethics. However if I can produce and sell a machine whose advertising claims that that machine is conscious, or even artificially conscious -- and that claim isn't disallowed by the Advertising Standards Authority -- then I'll be able to claim strongly that my AC is ''real'' AC. Some implementations that I've considered as candidate ''AC products'' are:
::*A ''conscious'' '''bus stop''' that recognises the people who regularly stand next to it, welcomes them, tells them when their bus is due, introduces them to each other, shares in their complaints about the bus service, answers questions by sending voice-recognised text to Google, plays music, replays horrifying images of muggings that have taken place near it, etc, etc.
::*(Related to the previous) A ''conscious'' '''policeman on the beat''' that collects evidence, helps the public in simple ways like answering questions about the way to the nearest bus stop, and never wants promotion
::*A ''conscious'' '''sexual partner''' that never thinks, never complains, and whose heuristic is designed to ensure increasing satisfaction the more it is used. This is perhaps the most promising idea in that there is already a well-established market for sex toys
::*A ''conscious'' (and articulated) '''computer screen''' combined with webcam, upon which appears an animated representation of a human (which a camera on this or on other instances had previously ''observed'' and which uses morphing software to drive its images), which proactively interacts with its ''user'' (partner?) and takes on personae which are effectively caricatures of people with whom it has interacted previously. (Of course one could buy characters, much like people buy mobile phone ring tones, and people who had used the product to the extent required for the machine to build virtual representations of them could sell their characters. A useful pastime and money-spinner perhaps for old celebrities)
::*A ''conscious'' '''driving assistant''' (already marketed by BMW on its latest model) that keeps your car going in the right speed and direction even when the actual driver becomes inattentive
::*A ''conscious'' '''dinner party host''' that ensures guests' glasses are filled, coordinates the timing of the cooking, makes trivial conversation (AI component required here, perhaps), and interrogates bus stops across the internet to trace late-comers
::If any of these ideas were freshly patentable then they aren't now! They are in the public domain on wikipedia. And I don't think you need a philosopher to tell you whether they are implementable. ] 08:06, 1 May 2004 (UTC)

:::As I said before, the results of scanning the neural activity http://www-inst.eecs.berkeley.edu/~cs182/readings/ns/article.html suggest that awareness is awareness of processes. The human genome project is completed only in the sense that the DNA sequences are mapped; the function of only a very few fragments of DNA is known. The complexity of that problem is so huge that it's not feasible to understand everything that way, nor probably by scanning the whole neural activity. In addition, such scanning only shows when neurons fire, but nothing about what happens inside the neuron, and a neuron may be as complex as a computer. And even if we could do that, just having a map would not by itself explain anything. There is also by far not enough information in DNA for all the brain activity that even a child has, which shows again that brain activity is not pre-programmed, but is a result of learning. What we must understand is how that happens, not what *exactly* is in the brain. AC is useful for providing an experimental method to test philosophy. I thought at first just about a regulator, which could model processes based on the input and predict how they would develop: natural processes, such as the friction of the tyres of your car, which depend on so many things and cannot be uniformly modelled. ] 15:54, 1 May 2004 (UTC)

:"What makes your project not weak AI?" "Objective Less Genuine AC" should do that. Well, "Genuine AC" should do that as well. Not sure though how Paul interprets it, and why he said that a thermostat can be considered to be "Genuine AC"; concerning that you should ask him. I don't support "Genuine AC" because I don't think that it would ever be possible to model consciousness completely. ] 09:40, 28 Apr 2004 (UTC)

You use ''project'' and ''expert'' in a restrictive way which suits your argument. Given that, I agree with what you say. Please do not misunderstand me: I never said the problems are solved and now all we need is enough ] to construct Sentient Expert Mk 1. ] 11:13, 27 Apr 2004 (UTC)

:The 'you' in my first paragraph was directed at Tk, and my comments about ''projects'' and ''experts'' was directed more at him as well. I mostly agree with you (PB) when you said that there ''are'' strong AI projects, etc., but I was addressing the restrictive sense of ''project'' as in an engineering project, which was what I took Tk to mean. ] 00:35, 28 Apr 2004 (UTC)

Obviously there isn't a Strong AI project like the Boeing 7E7 ''engineering'' project.
How many research grants, how many funded scientists does there have to be before a '''project''' is admitted? There are, of course, many Strong AI research projects. And each computer scientist / philosopher of mind engaged in Strong AI has a '''theory''', or pretends to have one, to get his/her funding. The theory is not established like, for example, Special Relativity but to say there is no theory (or rather no set of candidate theories) is just to redefine the word. ] 11:40, 27 Apr 2004 (UTC)

:Yes, there are engineering projects that are created for a purpose other than making something useful for humans. But calling them 'strong AI' projects makes it sound like there are conscious robots out there. However, people have created robots that act like insects that can move around autonomously, etc. ] 00:35, 28 Apr 2004 (UTC)

Upon reconsideration, I agree that AC should remain a separate article and should not be merged with AI nor moved to Strong AI. ] 03:20, 28 Apr 2004 (UTC)

== The concept of machine ==

Daniel Dennett compared the mind to a machine, but the problem is that he never said what he means by machine (if you find where he did that, then please say). A machine is usually interpreted as something human-made, or something that humans can make. Then there are virtual machines, like the Turing machine, which are theoretical concepts that cannot always be made (like a Turing machine with an endless tape), or which we cannot implement because it's impossible to obtain all the necessary information to implement them. So machine and virtual machine are not the same. We can say that the mind is equivalent to a certain virtual machine which satisfies the Church-Turing thesis, which it certainly is, but this is not the same as saying that the mind is a machine. We may call such a theoretical machine a Church-Turing machine, to give it a name. Then we may say that the mind is some kind of Church-Turing machine. We cannot follow Dennett's logic as long as we don't know what Dennett meant by machine, especially when the question lies '''exactly''' in the difference between consciousness and a machine. So this is a theoretical problem, I hope it's understood. ] 11:41, 28 Apr 2004 (UTC)

----

Fortunately for Dennett the Church-Turing thesis says that all machines are equivalent in computing ability except for speed and memory capacity. So Dennett does not have to say what type of machine he is talking about. One idealised computing machine, the Turing machine, is shown by Turing to be equal or superior in capability to other computing machines. A Turing machine is of course implementable (and there are examples on the web) except for the infinite memory size. But any program which runs in finite time can only access a finite amount of tape. The human being has a finite life time and so doesn't need an infinite tape. According to the Church-Turing thesis, any computer can mimic any other. The unavoidable conclusion is: If your Sinclair ZX-80 cannot mimic the human being replace the C30 cassette tape with a C60. Of course, there is also the problem of writing the software. ] 12:02, 28 Apr 2004 (UTC)
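The point above, that a Turing machine is implementable except for the infinite tape and that a program running in finite time only ever touches a finite stretch of it, can be sketched in a few lines of Python. This is an illustrative sketch only: the simulator, the bit-flipping machine, and its transition table are invented for the example, not taken from the discussion.

```python
# A minimal Turing machine simulator. The tape is a dict, so only cells
# actually visited consume memory -- a halting run touches a finite tape.

def run_turing_machine(transitions, state, tape, halt_states, max_steps=10_000):
    """transitions maps (state, symbol) -> (new_state, write_symbol, move)."""
    head = 0
    for _ in range(max_steps):
        if state in halt_states:
            return state, tape
        symbol = tape.get(head, '_')              # '_' is the blank symbol
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == 'R' else -1
    raise RuntimeError("did not halt within max_steps")

# Example machine: flip every bit, halt on the first blank.
flip = {
    ('scan', '0'): ('scan', '1', 'R'),
    ('scan', '1'): ('scan', '0', 'R'),
    ('scan', '_'): ('done', '_', 'R'),
}
final_state, final_tape = run_turing_machine(flip, 'scan', {0: '1', 1: '0', 2: '1'}, {'done'})
# cells 0..2 now hold '0', '1', '0'
```

The dict-backed tape is the design point: it grows on demand, so the "infinite" tape costs nothing for inputs a finite computation actually reaches.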
So, are you saying that humans are not machines? That there is something spiritual about them: a soul or a magic spark? Or are you questioning the Church-Turing thesis? Or are you questioning its applicability to humans on the basis of new physics? Or are you saying that the software cannot be written? ] 11:46, 28 Apr 2004 (UTC)

:Are you saying that humans ''are'' machines? The argument for strong ] goes:
::(1) Given that the mind is the software/hardware brain, and
::(2) Given the ],
::(3) The possibility of Strong AI must be accepted.
:I do not disagree with (2), but (1) is questionable. --] 02:37, 5 May 2004 (UTC)

::Yes, that humans are but machines is a very common view amongst many/most scientists and many/most philosophers. Even Penrose agrees with (1) but he disagrees with (2). See ]. But the point is not whether you or I agree with either (1) or (2) and of course you can logically disagree with either but that '''if''' you accept them both '''then''' the conclusion (3) is unavoidable. ] 02:48, 5 May 2004 (UTC)

:::Many/most scientists and philosophers believe in some form of materialism, but I don't think many/most believe that the mind is a machine qua '''symbol manipulation'''. ] 04:03, 5 May 2004 (UTC)

::::If I change "many/most" into "many" then the point is incontrovertible: Many scientists and philosophers do believe that the hardware/software brain is a computing machine.

:::My point is that the mind is not necessarily a virtual machine. ] 04:08, 5 May 2004 (UTC)

::::If by "virtual" you mean not-real then I agree with you. But I think we disagree because many do believe the mind is really the software/hardware machine. That I strongly believe so is hardly pertinent! Your position is, of course, not an unusual one: Most religions are on your side. ] 11:43, 5 May 2004 (UTC)

What??! By virtual machine I mean a ] or ]. The mind may not be analogous to concepts of software/hardware, because it is controversial that cognition consists of merely symbol manipulation—symbol manipulation as in the reading and writing of discrete tokens such as 0 and 1. In the ] framework, cognition isn't transformation of discrete symbols, but rather patterns of activity. ] 14:02, 5 May 2004 (UTC)

I think you may be redefining "]". The view you express, if backed up by appropriate citings, should go in the article. But others think (and I reckon I can find references to back this up) that "patterns of activity" is just like the flashing of lights on the panel of a 1970's computer. What is going on, underneath the patterns, is symbol manipulation. ] 22:22, 5 May 2004 (UTC)

:"Virtual machine" I took from Tkorrovi's post, the first one in this section. Connectionist nets are modelled after ]s in the brain. The firing of one node activates certain other nodes in its vicinity, depending on the connection strength between it and a neighboring node, which is supposed to simulate how neurons work. In ]s, I believe connections are given numerical weightings, e.g. 0.55. I think the numbers aren't supposed to be discrete, but are theoretically supposed to be ]s (i.e. continuous). If you accept this model, cognition cannot be symbol manipulation, because there are no discrete ]s like either 0 or 1, but rather, it uses the set of real numbers (of which there are infinitely many, with no discrete divisions). Hence, if you accept this model, the mind cannot be a Turing machine. ] 00:12, 6 May 2004 (UTC)
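The connectionist picture described above can be sketched as a single unit with graded, real-valued weights. This is a hypothetical illustration: the particular weights and the sigmoid squashing function are standard textbook choices, not taken from the discussion.

```python
import math

# A single connectionist unit: inputs flow in through real-valued
# weights, are summed, and squashed by a sigmoid into a graded value.

def unit_activation(inputs, weights, bias=0.0):
    net = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-net))    # continuous output in (0, 1)

a = unit_activation([1.0, 0.0, 1.0], [0.55, -0.2, 0.3])
# 'a' is a graded activation (about 0.70 here), not a discrete 0-or-1 token
```

The contrast with the Turing machine sketch is the point at issue in the thread: here nothing is a discrete token, every value is a point on a (nominally) continuous scale.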

::I think the use of "virtual" in this context is a red herring. As a software professional, I have always understood a virtual machine to be an implementation of a machine within another machine, e.g. the Java Virtual Machine within the Windows operating system (machine). Now whilst it might be worthwhile to think of "mind" as a virtual machine within the "body" machine, I do not think that is the intention here and I don't think that analogy holds anyway. It is more likely, I think, that Tkorrovi looked up "virtual" in the dictionary and thought that it would be useful to describe mind as "a bit like a machine", or "virtually a machine", which is of course misleading anyway, as we are aiming to indicate whether mind is a machine at all, and as Wikiwikifast points out, from a scientific materialist perspective, it is. The salient question, as Wikiwikifast again correctly identifies, is to ask what sort of machine the mind is, because that will throw light on whether it can be emulated using the methods available to us. ] 06:49, 6 May 2004 (UTC)

:::"Virtual machine" is another of those compound nouns the meaning of which can be perfectly well understood from each of the constituent words. It also has its own ]. The first usage of that term in this section is incorrect. ] 09:39, 6 May 2004 (UTC)


Do real numbers exist in nature? This is an interesting philosophical question. Even distance, we are told by the quantum theorists, is discrete. Maybe only rational numbers exist. Certainly our vision is digital and discrete. The pixels are close together and we do not see pixellation - there must be smoothing happening in software - but the retina consists of discrete rods and cones. In the connectionist nets the values assigned to the nodes are not chosen from the whole set of real numbers but from a subset of the rationals: This is a proven limitation of all computers. As the firing of a neuron in the brain must involve an integer number of electrons we also know that not all real numbers can be represented at the synapse junctions. The neuron cannot be set to fire after the transfer of a fractional number of electrons. The brain does not process real numbers. Brain processing is chemical and electrical. Everything is a multiple of a number of molecules or a multiple of a number of electrons. Brain processing is discrete. ] 01:17, 6 May 2004 (UTC)
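The claim above, that computers draw connectionist weights from a finite subset of the rationals rather than the real continuum, can be checked directly. A small Python illustration (assuming Python 3.9+ for `math.nextafter`):

```python
import math

# A machine float is one of a finite set of rationals, not a point on
# the real continuum: between any float and the next representable
# float there is a gap into which no stored weight can ever fall.

x = 0.55                            # a typical connection weight
next_x = math.nextafter(x, 1.0)     # the closest representable float above x
gap = next_x - x
print(f"smallest representable step above {x}: {gap:.3e}")
```

For a 64-bit float near 0.55 the gap is on the order of 1e-16: tiny, but strictly nonzero, which is exactly the discreteness the comment appeals to.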

See ]. Is the Universe computable? ] 01:35, 6 May 2004 (UTC)

:You have a good point. However, I just wanted to point out that not everyone believes in symbolicism.
::'' '''Symbolicism''' - An approach to understanding human cognition that is committed to language-like symbolic processing as the best method of explanation. ... The commitments of symbolicism have been challenged by research and most recently the approach to cognition.'' -
:See also ]' '']'' for an argument against the claim that the mind is a Turing machine. Instead of debating incessantly over talk pages, I should probably use my time constructively by contributing to the AI and cognitive science articles... ] 02:38, 6 May 2004 (UTC)

:::As to what type of machine the brain is, this is settled for most scientists. Turing showed that anything which does symbol manipulation is no more powerful than the Turing machine. And anything passing about discrete lumps of stuff (e.g. electrons and molecules) is doing symbol manipulation in the computer science sense. It is a finite state machine. All finite state machines are no more powerful than the Turing machine. Proven computer science result. It's maths. No argument. The onus is on the dissenters to explain what is wrong with any of this reasoning. Penrose understands this and attempts to do so, but established science finds serious flaws in his arguments. But Penrose understands what it is he has to show; most other dissenters ignore the ]. That is what is happening here. ] 09:39, 6 May 2004 (UTC)
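The finite-state-machine reduction invoked above can be illustrated with a toy example (hypothetical; the parity machine below is invented for the illustration). A finite state machine is nothing but a fixed transition table over discrete symbols, which is why it is subsumed by the Turing machine:

```python
# A minimal finite state machine: current state plus next discrete
# symbol fully determine the next state, via a fixed lookup table.

def run_fsm(transitions, start, inputs):
    state = start
    for symbol in inputs:
        state = transitions[(state, symbol)]
    return state

# Tracks the parity of discrete "lumps" (e.g. electrons) transferred,
# one at a time -- discrete events, discrete states, nothing continuous.
parity = {('even', 'e'): 'odd', ('odd', 'e'): 'even'}
final = run_fsm(parity, 'even', 'e' * 5)   # five discrete transfers
# final == 'odd'
```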


::The cognitive science article would indeed be a better place for that. This is not necessary for AC, considering the Thomas Nagel argument that subjective experience cannot be reduced (modelled), and this is included in the article. Considering that, it doesn't matter whether subjective experience is physical. I think it's physical and Nagel thinks it's physical, but this doesn't matter for AC. ] 08:59, 6 May 2004 (UTC)

:::If the mind is no more powerful than the Turing machine then what accounts for subjective experience? A magic spark? The soul? Penrose quantum quackery? Maybe Nagel is wrong. ] 09:45, 6 May 2004 (UTC)

It (that the mind is a machine) does matter for genuine/true/real/strong AC. Obviously. ] 09:39, 6 May 2004 (UTC)

----

All machines are equivalent in computing ability except for speed and memory, but there is a difference between a virtual (theoretical) machine and a man-made machine, also in that it may not be possible to obtain all the necessary information to make the machine, even though theoretically some type of virtual machine could implement it. Please read Thomas Nagel's "What is it like to be a bat?" to get an idea. ] 12:19, 28 Apr 2004 (UTC)

I have read the book*. I have demonstrated the equivalence of the Sinclair ZX-80 and the human being (except for speed and memory capacity). Neither of these are theoretical machines. Trying my best to interpret what you have said I think you have chosen the last of the presented alternatives: You think the software cannot be written. ] 12:28, 28 Apr 2004 (UTC)

:(*)I have read the book "The Mind's I" in which Nagel's ideas are quoted at length and the bat essay may appear in its entirety. The critique of those ideas which appears in that book, and especially how they are contrasted with other ideas, is interesting and entertaining. But, I suggest, not entirely relevant to the narrow point we are trying to resolve here. ] 12:43, 28 Apr 2004 (UTC)

::Yes, it may not be entirely relevant. It is said in the article ] that "While many philosophers of mind and cognitive neuroscientists accept the fundamental distinction between the subjective and the objective, they often have not accepted Nagel's dismal conclusions." This is almost exactly what is relevant for AC. But concerning the dismal conclusions, maybe Nagel is not so well understood; he doesn't deny the reduction of that which is objective. ] 12:56, 28 Apr 2004 (UTC)

Whether written or taught, we cannot make sure that it can be developed so that it implements consciousness '''completely''', but we can develop it so that it implements consciousness '''partly''', provided we don't omit anything except that which is not objective. ] 12:39, 28 Apr 2004 (UTC)

OK, I cannot deny that view but that is just one way things might work out. At some point the built-consciousness might reach a critical mass, transfer itself to a local super-computer cluster of 100,000 Trituim 86666MHz processors running MacOS XII, attach several Terabytes of NAS memory and, changing gear, evolve a consciousness which is a superset of human consciousness. But, please, neglect this flight of fancy for the moment. I return to your view: Will your partially implemented consciousness be a genuine manifestation of consciousness or not? And, if not, why not? ] 12:52, 28 Apr 2004 (UTC)

:The problem is that whether it is genuine consciousness or not can never be found out. Because of that, as science must be objective, in scientific terms we must say no. But there is a possibility that it may one day seem very genuine, especially if implemented on a quantum computer; it may even exceed some human abilities, as in theory there seems to be nothing that restricts such a system. And then one day we may believe that this is genuine consciousness, but there would be no scientific way to find out; all that we could test even in that case would be that it is artificial consciousness. ] 13:13, 28 Apr 2004 (UTC)


::But, as an aside, and this is another fact to trip up Penrose, the Church-Turing thesis still applies to the ]. ] 13:47, 28 Apr 2004 (UTC)

:::Sure, I also think that the Church-Turing thesis applies, and I think it's not Gödel's theorem that is wrong, but rather the way Penrose uses it. This question has been discussed endlessly on the Internet. ] 20:59, 28 Apr 2004 (UTC)

Dennett deals with this issue well in both ''The Intentional Stance'' and ''Consciousness Explained''. Essentially, he denies the human any special place in the universe: we grant another human consciousness because he says he is conscious, and it is mere arrogance to deny something else consciousness if it acts as if it is conscious and if it claims it is conscious. He makes the point better than I do, doubtless. ] 13:24, 28 Apr 2004 (UTC)

As I understand it, he is saying that you fail to apply your use of subjective and objective to humans themselves. Subjectively you say you are conscious. Objectively, how do I know? ] 13:26, 28 Apr 2004 (UTC)

Yes, the only reason we can say that we are conscious is that we are humans, and capable humans are considered to be conscious when they don't happen to be in a coma. We cannot even completely compare our consciousness to that of somebody else. This doesn't mean that we are better; this is just how we determine consciousness. Maybe dolphins are better than we are, but we can never know. ] 13:40, 28 Apr 2004 (UTC)

Except that dolphins act as if they are conscious, they do all but ''say'' they are conscious. I have seen a conscious dolphin, I am sure. Neither you nor I can be sure that the other is conscious: Indeed, for all you know I have tapped this out on a panel in my tank. <nowiki>]</nowiki> ] 13:47, 28 Apr 2004 (UTC).

===Counter-example===
] describes in his book ''The Man Who Mistook His Wife For A Hat'' the case of autistic-savant identical twins who were able to compute large prime numbers within a timescale several orders of magnitude less than what would be required by the most powerful computer to perform the same feat. These twins were not articulate enough to describe the methods they used, and I think their ability remains a mystery twenty years later. They certainly did not have the mathematical ability to compute prime numbers using any standard algorithm such as the sieve of Eratosthenes - they could not even do simple multiplication. Such pathologies as Sacks describes raise questions about the operation of the brain that could possibly be used to counter the Church-Turing thesis, or at least to question it until a viable explanation had been found. The twins in Sacks' story seemed to have a particular ''consciousness'' of primeness (and not much else besides) and savoured each new prime number they encountered as one might savour a newly discovered vintage wine. Sacks mentions a number of examples of enhanced and deficient mental abilities that surely throw some light on our conception of consciousness, and which perhaps should be taken into account in our discussions here. ] 07:13, 6 May 2004 (UTC)
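For reference, the "standard algorithm" mentioned above, the sieve of Eratosthenes, is mechanical enough to state in a few lines. A Python sketch for illustration only; nothing here is claimed about the twins' actual method:

```python
def sieve_of_eratosthenes(limit):
    """Return all primes <= limit by crossing out multiples."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            # Every multiple of n from n*n upward is composite.
            for m in range(n * n, limit + 1, n):
                is_prime[m] = False
    return [n for n in range(limit + 1) if is_prime[n]]

print(sieve_of_eratosthenes(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

The point of the counter-example stands either way: the twins could not have been running anything like this, since the sieve presupposes exactly the multiplication they could not do.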


But you invent magic. We do not know how we remember the spelling of "Eratosthenes" but we do not invent magic to explain it. The scientific method does not say "invent science to support your prejudices". ] 09:34, 6 May 2004 (UTC)

Experiments on primates have shown that there is an area of the brain concerned with visual cognition that does pattern-matching and which, interestingly, operates at a level of resolution (topological complexity, to be more precise) commensurate with alphabetic and other writing system symbols (e.g. Chinese). This is used to show that our brains were not designed for reading, but that we have innate capabilities (shared with other primates) that lend themselves to interpreting writing. So our spelling ability is capable of being understood at an elementary level. However, the recognition of prime numbers is held to be a difficult feat to perform algorithmically, and the intractability of such number-theoretic problems (factoring large numbers, in particular) is indeed the basis of modern cryptography. Therefore if we cannot explain how certain individuals' brains can ''divine'' prime numbers, surely that is a good argument to suggest that the Church-Turing thesis may not hold in all circumstances when it comes to understanding consciousness, as I suggest above. If we ''could'' explain it then there is the possibility that cryptography would founder. ] 09:52, 6 May 2004 (UTC)

Firstly, you overstate the ability of the idiots savant. They could not do '''any''' prime number problem. Related problems, e.g. ones which a mathematician would be able to work out if they knew what the twins knew, stumped the twins completely. You have managed to remember and store away a large vocabulary, to store away a fairly detailed map of London, to remember many facts about scores of people. These idiots savant did none of that. The storage capacity of the brain is known to be enormous. Yet I can store a lot of prime numbers on my laptop. No one is saying that the brain is not remarkable. But no new science should be invented until necessary. It is difficult for people to appreciate they are not God's chosen ones. We '''feel''' special. A more plausible explanation than the brain not being finite state sub-Turing is that it's drugs in the water that make you think this way. No new science required for that. ] 10:03, 6 May 2004 (UTC)

A much better counterexample is the finite state machine known as Beethoven. ] 10:06, 6 May 2004 (UTC)

::Please would you provide a reference to Beethoven in this context, unless you mean the composer. If the latter, why should the ability to produce musical patterns be cited as a counter-example for the Church-Turing thesis in relation to mind? ] 10:38, 6 May 2004 (UTC)

:::I agree with you! Beethoven, even though he was much more impressive than Sacks's twins, is no evidence for humans being more than FSMs. ] 10:47, 6 May 2004 (UTC)

In the story that Oliver Sacks tells (and he was a neurologist invited to examine the twins), at first no one was aware that the twins were savouring prime numbers. He listened to them, noted down the numbers they mentioned, and took his notes to a mathematician who indicated that they were all primes. Sacks then wrote down some more primes - bigger ones - and re-joined the twins. He mentioned his primes and the twins paused to consider them. They had never encountered such large primes and were delighted with the new player (Sacks) in their game. I forget how large they went in terms of the numbers mentioned, (and the twins did take longer to recognise larger primes) but I think it's fair to say that it's unlikely they had memorised a list previously presented to them. They also, incidentally, had the ability correctly to state (without delay) the day of the week of any date in the previous or next 10,000 years. I doubt whether they'd had the chance to memorise such a large number of calendars. Whatever the explanation, I don't think it's one of them having an eidetic memory, though some idiot savants do have this. I also don't think their ability was learned - the fact that they were identical twins being important here. ] 10:30, 6 May 2004 (UTC)

:Identical twins important? Did you forget to provide this ]? My point being that even a cluster of computing machines is no more capable except in speed and memory capacity than any other computing machine. ] 12:00, 6 May 2004 (UTC)

If you know all the prime numbers to 1000, say, then it is not super-humanly difficult to test numbers to 1000000 for primeness. Not magic. The calendars might have impressed Oliver Sacks the mystic, but it would not have impressed Oliver Sacks the mathematician. You only have to learn 14 calendars to know them all. The magic feat is simply one of dividing modulo 14 and applying an offset correction for Pope Gregory. I can tell you whether any number of any size is divisible by 11. But then I am not like you machines. I take Super Unleaded. ] 10:45, 6 May 2004 (UTC)
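The two feats dismissed above really do reduce to short mechanical procedures. A hedged Python sketch (illustrative only; no claim is made that the twins, or anyone, computes this way): trial division by the primes below 1000 settles primeness for any number below a million, since a composite n must have a prime factor no greater than the square root of n, and the divisibility-by-11 trick is the alternating digit sum.

```python
def is_prime_below_a_million(n):
    """Trial division: the 168 primes <= 1000 suffice for any n < 10**6."""
    if n < 2:
        return False
    small_primes = [p for p in range(2, 1001)
                    if all(p % d for d in range(2, int(p ** 0.5) + 1))]
    for p in small_primes:
        if p * p > n:
            break          # no factor found up to sqrt(n): n is prime
        if n % p == 0:
            return False   # p divides n, so n is composite
    return True

def divisible_by_11(n):
    """n is divisible by 11 iff the alternating sum of its digits is."""
    alt = sum(d if i % 2 == 0 else -d
              for i, d in enumerate(int(c) for c in str(n)))
    return alt % 11 == 0

print(is_prime_below_a_million(999983))  # True: the largest prime < 10**6
print(divisible_by_11(918082))           # True: 918082 = 11 * 83462
```

Mechanisable, then, but still a prodigious amount of mental division to perform quickly, which is the residue of the mystery.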

Perhaps I am digressing, but it occurs to me, from reading Sacks and others, that part of our ''normal'' humanness consists of constraints (attenuators) on our consciousness. See ] (also relevant in the autistic savant context), and remember Sacks' example of the man who smelt like a dog (i.e. he could smell water at a distance) temporarily after experimenting with drugs. I am primarily concerned here, though, with the example of primeness. It is not easy to test a number for primeness in your head, even with relatively small numbers. See http://primes.utm.edu/prove/ for the known methods of proving primeness, and bear in mind that the twins couldn't do simple arithmetic. What I think they demonstrate is that there ''was'' some mysterious method of determining primeness (perhaps by means of some form of pattern matching and/or symbol manipulation) which was not learnt (the only reason why the twins are ''important'' is that they had the same genotype, which gave them each the same mysterious capability) and was not explainable by them - they didn't know how they did it, i.e. they didn't do it by any known mathematical/algorithmic process, and it couldn't be fathomed by anyone else, and still can't, as far as I am aware. ] 18:38, 6 May 2004 (UTC)

===Brain size===

In terms of learning ability, I am reminded of Richard Feynman's calculation that if you were to encode alphabetic symbols at the atomic level, using 100 atoms per character, and build a block of material encoded in such a way, then the entire content of all the world's reference libraries could be stored in a piece of matter (I think he said metal, actually) as small as a grain of sand. I don't know what the volumetric storage requirements of biological memory are, but assuming they are not more than an order of magnitude different from Feynman's model, then one could theoretically perhaps store the entire contents of the Internet in a small corner of one's brain. What this makes me wonder is whether there might be some kind of critical mass of grey matter below which consciousness is impossible, and that therefore there are physical constraints on artificial implementations using current technology. If this could be demonstrated then one might come up with a calculation that one would need a disk platter as large as the rings of Saturn, or perhaps as large as the orbit of Pluto round the sun, to store enough data to make consciousness possible. How many neurons are there in a typical brain? I think we should be told. ] 18:38, 6 May 2004 (UTC)

The unit of biological memory has to be the ] or the synapse because the reading and writing of material within the cell cannot be done in seconds. A neuron is huge when measured in numbers of atoms: 25 microns = 2.5*10^(-5)m. Feynman's info density packing will be based on the size of an atom: 2.5*10^(-11)m. 10^6 difference in linear dimension.
But to your question: The volume of a ] is 1600cc or 1.6 litres or 0.0016m^3. Linear dimension of a neuron is 2.5*10^(-5)m. Volume is 1.6*10^(-14)m^3. At maximum packing density there are 0.0016/(1.6*10^(-14)) = 1.0*10^11 neurons in the brain. The ] tells us 100bn neurons. 100*10^9 = 10^11. I was spot on: super unleaded! Is that 100 Gigabits = 12.5 Gigabytes? ] 23:45, 6 May 2004 (UTC)
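The back-of-envelope figures above can be checked mechanically. A sketch in Python using the discussion's own assumed dimensions (not measured biological values):

```python
# Assumptions taken from the discussion above, not measured values:
brain_volume_m3 = 0.0016      # ~1.6 litres
neuron_linear_m = 2.5e-5      # ~25 microns
atom_linear_m = 2.5e-11       # the atomic length scale assumed above

neuron_volume_m3 = neuron_linear_m ** 3           # ~1.6e-14 m^3
max_neurons = brain_volume_m3 / neuron_volume_m3  # ~1e11 at maximum packing
linear_ratio = neuron_linear_m / atom_linear_m    # 10^6 per linear dimension

print(f"{max_neurons:.2e} neurons at maximum packing")  # ~1.02e+11
print(f"{linear_ratio:.1e} linear size ratio")          # ~1.0e+06
```

The packing estimate agrees with the commonly cited figure of roughly 10^11 neurons, which is what the "spot on" above is claiming.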

===Brain as Finite State Machine===

Recipe for recognising the true nature of the brain: Define ]. See that brain conforms to definition. Acknowledge the proof that finite state machines are ]s (or, more accurately, that FSMs are no more powerful than a TM). Acknowledge that the ] holds that all computing machines (FSMs, Pentium IVs, etc) are equivalent in capability except in speed and memory capacity. Adopt the true faith. ] 10:39, 6 May 2004 (UTC)

I ought to quickly acknowledge my recent discovery that FSMs as formally defined ] (at least) are less powerful than TMs. Qualitatively, at least, my argument stands. ] 12:15, 6 May 2004 (UTC)
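The concession above is easy to illustrate. A finite state machine handles any regular property, such as "the input contains an even number of 1s", with a fixed number of states, while recognising a language like a^n b^n requires memory that grows with the input, which only a more powerful machine (a pushdown automaton or Turing machine) has. A minimal Python sketch of the regular case:

```python
def even_number_of_ones(bits):
    """Two-state DFA: accept iff the string contains an even count of '1'."""
    state = 0                  # 0 = even so far, 1 = odd so far
    for b in bits:
        if b == "1":
            state = 1 - state  # flip parity on each '1'
    return state == 0          # accept in the 'even' state

# No fixed-state machine can do the same for {a^n b^n}: matching the
# count of a's against b's needs unbounded memory, which a DFA lacks.
print(even_number_of_ones("110101"))  # True (four 1s)
```

The qualitative point survives the correction: a brain with finitely many physical configurations is, on this argument, still no more powerful than a (very large) finite state machine.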

==Related Topics==
*]
*]
*]
*]
*]
*]
*]

I think AC is most directly related to ]; other fields come from that (intelligence -> artificial intelligence), and artificial life and digital organisms should be related to biology. BTW, there is ] also; it's somewhat unclear, should it be a synonym of consciousness? ] 20:35, 2 May 2004 (UTC)

And ] ] 20:41, 2 May 2004 (UTC)

==People==
*]
*]
*]
*]
*]
*]
*]
*] <-- article required
*] <-- article required
*]

== Thermostat ==

Any conscious entity which does not appreciate the thermostat argument must have a screw loose. I suggest that we return it to its manufacturer as fatally flawed and ask for our money back. I would not be happy with a repair. But I worry that anything which is as broken as that is bound to be well beyond its warranty period. ] 00:00, 3 May 2004 (UTC)

What argument? David Chalmers didn't argue that a thermostat could be considered conscious; state clearly what argument you are talking about. ] 01:18, 3 May 2004 (UTC)

:Once again you make a wild assertion stated as if it is bold fact and it is wrong. To do this again and again, as you do, is fundamentally a dishonest way to proceed. ] 12:05, 3 May 2004 (UTC)

And if you are not competent in AC or consciousness studies, then give up. ] 01:28, 3 May 2004 (UTC)

:Absolute competence in AC studies I am not claiming for myself: These things are relative, of course, so competence is what I seem to have in relation to some others. Competence in writing an encyclopaedia requires an interest in the truth, an ability to understand English, the willingness to read others' '''competent''' research, the willingness to maintain an open mind, to not press one's own view in defiance of the facts. People who live in glass houses. The task at hand here is to write an encyclopaedia. ] 12:05, 3 May 2004 (UTC)

:A search at Google for "chalmers conscious thermostat" gives this result:

::David J. Chalmers in The Conscious Mind: In Search of a Fundamental Theory. OUP,1997: ''Someone who finds it "crazy" to suppose that a thermostat might have (conscious) experiences at least owes us an account of just why it is crazy. Presumably this is because there is a property that thermostats lack that is obviously required for experience; but for my part no such property reveals itself as obvious. Perhaps there is a crucial ingredient in processing that the thermostat lacks that a mouse possesses, or that a mouse lacks and a human possesses, but I can see no such ingredient that is obviously required for experience, and indeed it is not obvious that such an ingredient must exist.''

:That Tkorrovi repeatedly misrepresents the facts is well established. What would now be interesting would be to review '''all''' his contributions as I think we might find they are equally questionable. ] 12:05, 3 May 2004 (UTC)

----

I don't know, maybe you are right. David Chalmers wrote in the article referred to above "A thermostat, or indeed a simple connectionist network, as a model of conscious experience? This is indeed very surprising. Either there is a deep insight somewhere within Lloyd's reasoning, or something has gone terribly wrong." And from the interview:

"TT: So you're talking about this double-aspect view of information, (the idea that all instances of information processing, even simple ones, give rise to some kind of subjective experience - a sort of panpsychism though Chalmers is wary of that term.) In your book this led to questions like "What is it like to be a thermostat?"

DC: (laughing) Right, yeah. This is all very speculative of course."

So his statements are indeed highly controversial. So it's not me who misrepresents the facts or lives in the glass house; if anybody does, it's David Chalmers, and you included an argument by him in the article; I didn't refer to David Chalmers before. And all this panpsychism and pseudoscience has nothing to do with artificial consciousness; I don't know why you want to include it in the article. We cannot artificially make the "fundamental" consciousness Chalmers talks about, which would be as fundamental as space and time and cannot be explained by other physical processes. I deeply disagree with that. But I did like the way Chalmers argued against the connectionist view of consciousness in the article referred to above. ] 16:46, 3 May 2004 (UTC)

----

Of course they're controversial: The whole subject is controversial and speculative. And it is a result which he finds surprising but pleasingly so and which he cannot discredit. But tkorrovi's point was that Chalmers did not say something that he did indeed say. He stated this vehemently as if he had checked, and removed the Chalmers point from the article. ]

My point was based on one argument of Chalmers, by which he clearly didn't consider the thermostat conscious. You seem to agree that his arguments are controversial, so it's not my fault if another argument contradicted it. But if you think it is controversial, why did you include it in the article, stating it there as if it were certain? ] 17:48, 3 May 2004 (UTC)

:'''Tkorrovi caught out again in another barefaced lie. If he can twist the facts to support his view he will. I did not insert the thermostat argument as if it were certain fact. I wrote (in a section discussing various schools of thought): "Some believers in Genuine AC say the thermostat is really conscious".''' ] 15:06, 4 May 2004 (UTC)

== What dictionary ==

If we want to use a free dictionary that also remains free (is under the GPL licence), then we should not use dictionary.com but GCIDE, which includes entries from both the public-domain 1913 Webster and the 1997 WordNet. dictionary.com searches GCIDE and also some proprietary dictionaries. ] 01:20, 3 May 2004 (UTC)

== What and that==
There is benefit in using a dictionary (any dictionary, but a learner's dictionary in particular) to discriminate between the usage of ''what'' and ''that''. One of the benefits of humanity is that people (or at least some people) are able to learn languages. Some people, unfortunately, never master this art. ] 13:01, 3 May 2004 (UTC)

:As I remember, you were the one who suggested to use free dictionary, and was so vehemently against using Concise Oxford Dictionary. Did your opinion change meanwhile? Why free dictionary is better than just any dictionary is that it is available to everyone, this avoids confusion of referring to different dictionaries. This is advised in Wiktionary as well. ] 16:24, 3 May 2004 (UTC)

The problem is that we must be much more precise here than just what an that. ] 16:49, 3 May 2004 (UTC)

The word is "and", not "an". We will continue using the best reference material available. ] 17:26, 3 May 2004 (UTC)

:You act like chatbot what cannot understand that a mistake was made just by not pressing a key hard enough. ] 18:01, 3 May 2004 (UTC)

It's not "what" but either "which" or "that". Press those keys harder. ] 18:18, 3 May 2004 (UTC)

Ask Matt Stan then why he wote "what and that" and not "which and what" ] 18:26, 3 May 2004 (UTC)

I do not need to: He was referring to an error you made confusing the correct usage of "what" and "that". I refer to a later error, above, where you should have used "which" or "that" instead of "what". Oh, and you missed out an "a". ] 18:32, 3 May 2004 (UTC)

What you exactly want to say and why is it important? ] 19:09, 3 May 2004 (UTC)

It is important, as Tkorrovi indicates, with a controversial topic, to report accurately what the proponents' arguments are. In reporting accurately, it is helpful to maintain correct usage of the language concerned. People reading an article won't be so impressed if they think the writer is illiterate. Misplaced Pages is very forgiving in this respect because those who know correct usage can come in and put an article right; so perfect writing style is not a requirement in the first instance. However, when someone repeatedly makes the same mistake, in this instance a seeming confusion of usage of certain prepositions and relative pronouns, then I don't think it out of order on a talk page to point this out. What is interesting here is that, rather than going away and learning the correct usage, the object of my criticism seems to want to argue about what I meant when I wrote the heading to this section. C'est la vie! ] 08:44, 4 May 2004 (UTC)

:To illustrate, the sentences 'I know what you wrote correctly.' and 'I know that you wrote correctly.' are both grammatically correct, but mean different things. The first indicates that I know something and you wrote it correctly; the second simply that I know about the correctness of what you wrote (regardless of whether I know about what you wrote about). ] 08:44, 4 May 2004 (UTC)

== Matt Stan falsely accused by Tkorrovi ==

'''Matt Stan, with what right you deleted part of my post without even saying anything?''' ] 17:09, 3 May 2004 (UTC)

::I deleted nothing that you wrote. If you read what is there, in the comparison URL given above, you'll see that I just broke your long paragraph up into sections so that I could answer the points separately. But your paranoia seems to preclude your being able to understand what I wrote or answer the points that I make. Why is that? I think the Russell quote is particularly apt in this context. ] 08:19, 4 May 2004 (UTC)

And I have a suspicion that this was also done against me before. Reading the archives, I didn't find some posts that I remember writing. But I don't have all the time in the world to search through the history to confirm it. Is this accepted behaviour by people who are supposed to talk about science? ] 17:19, 3 May 2004 (UTC)

The example you give in the 1st para is not evidence of what you allege. You follow this up with another allegation of which you present no evidence. You are a dishonest troll, tkorrovi. Please go away. ] 17:23, 3 May 2004 (UTC)

No, as I did show evidence that my post was indeed secretly changed, this is not a dishonest allegation but a substantiated suspicion. If you indeed came here to make jokes, then please do that in some more proper place; the problem is that your jokes here are not well understood by most people. I will not go anywhere, because I am an honest man, and an honest man has nothing to fear. ] 17:31, 3 May 2004 (UTC)

I asked this on Matt Stan's talk page also; no reply yet. A comment last written on that page was "Anecdotalise from an irrelevancy on the artificial consciousness talk page". The only way to argue is by correct arguments; if you don't want that, please go away and let people talk seriously here and write a good article. ] 17:42, 3 May 2004 (UTC)

I followed the link Tkorrovi provided in the first para. Every word he wrote remains. Use the scroll bar. ] 17:44, 3 May 2004 (UTC)

Then don't write replies in the middle of posts; somehow it caused the rest of my post to appear on a single line, so that it couldn't be read. I don't know whether it was intentional or not, but you see that your attitude and vandalism here cause suspicion of the worst case. ] 17:58, 3 May 2004 (UTC)

Your response is not appropriate. Think: What response would you expect if you had been wronged as you have now wronged Matt Stan? Make that response. ] 18:05, 3 May 2004 (UTC)

Yes it was appropriate, stop joking here and become serious. ] 18:11, 3 May 2004 (UTC)

You lack honour. Your other allegation remains here. You have not withdrawn it. Give examples or withdraw that too. ] 18:16, 3 May 2004 (UTC)

== "Part of my post was made unreadable", says Tkorrovi, blaming his tools, "by Mozilla" ==

Part of my post was made unreadable ] 17:09, 3 May 2004 (UTC)

If you look at the "What should be in the AC article?" section, you see a post that is on a single line; at least I see it with Mozilla. ] 18:23, 3 May 2004 (UTC)

I use Mozilla as my browser. I do not have this problem. And if I did I would use the '''horizontal''' scroll bar before I accused others of deleting text. ] 23:19, 3 May 2004 (UTC)

== Tkorrovi apologizes and proposes to forget the issue ==

OK, I'm sorry, I admit that I made a mistake, and I deleted the dispute, proposing to forget the whole issue, but you don't want to stop. Part of my post indeed appeared on a single line; I don't know what technical problem may have caused this, but I made a mistake and didn't notice that line; I must be more careful in the future. We all make mistakes; I think you admit that you also sometimes make mistakes. ] 23:34, 3 May 2004 (UTC)

Whatever, if Matt Stan now says he's happy then I won't revert if you delete the section. ] 23:42, 3 May 2004 (UTC)

==Title==

Titles such as ] are not appropriate for Misplaced Pages. If you want something with that sort of title, try ]. I've listed it on ]. ]] 23:47, May 3, 2004 (UTC)

== NPOV ==

The article includes all views that have been proposed, for and against, yet that still doesn't satisfy you, Paul. Why do you still insist that the article is not NPOV? ] 23:57, 3 May 2004 (UTC)

::NPOV isn't the be all and end all of articles in wikipedia. Sure, when a view is expressed, it should be expressed in such a way as to accommodate a different view. But that isn't the same as saying 'anything goes (provided one expresses it couched in NPOV terms)'. People come to an encyclopedia expecting to obtain ], not ]. The discriminant between knowledge and patent nonsense (and the shades of indeterminate truth in between, i.e. pseudoscience) is the academic establishment and the institution of peer review. How can I make a judgment about, say, the reality of ] unless I am aware of the claims and counter-claims about it? The same must surely apply to machine consciousness. There is also a tradition in wikipedia of removing patent nonsense. Who judges what constitutes patent nonsense? Why, we the wikipedians, who ourselves provide the ultimate peer review, ultimate because it is not restricted to subscribers to academic web sites. What is the arbiter to help us decide whether something is patent nonsense? Why, it is whether or not the view expressed on a scientific topic is expressed properly and is itself backed up by academic references. If not, then any wikipedian can refute that view and, if need be, remove it from an article. The question that remains then is, "Is artificial consciousness supposed to be a scientific article or a pseudoscientific article?" Again the convention in Misplaced Pages is that an article should state at its outset its terms of reference. Rather than putting "The neutrality of this article is disputed", perhaps it would be better for the article to start with "This article covers the aspects of machine consciousness that are not backed up by scientific research. For a more rigorous scientific treatment, see ]" (or perhaps link to a section within the ] page). ] 08:54, 10 May 2004 (UTC)

You know not what NPOV is. Views with which you disagree have now had flawed criticisms applied to them by you. If Nagel says it, then it is Gospel truth as far as you are concerned. Anything anyone else says is not allowed by you without your applying what is often an unfair reading of Nagel to it as criticism. You refuse to enter into logical argument and you seem unable to. As has been demonstrated, you continue over and over again to state as fact that which is not true. You will not even allow your English to be corrected. It seems Matthew (Matt Stan) has left not to return; you must really have annoyed him. I will also stop contributing here (it is too much hard work to keep you honest), but if I do I will ensure the NPOV line remains. You are not an asset to Misplaced Pages. You only contribute here and you wind the rest of us up. You should have a look at other articles Matt Stan contributes to as a lesson on how to contribute to Misplaced Pages. You are a troll. That is why. ] 00:10, 4 May 2004 (UTC)

Tkorrovi, I think you should read the info from Angela above. Then you can do your own AC article ] where your own distinct view can be propagated. Don't forget to quote your qualifications and experience as it says you should. ] 00:32, 4 May 2004 (UTC)

If something is not undisputed fact, or other theories question it, then it must be explained; what is wrong with that? This page and the archives are full of logical argument by me, you and others, even if we exclude all the unnecessary personal attacks, too much to say that I refuse to enter into logical argument. What is it that I continue over and over again to state as fact which is not true? I am honest and I am not a troll; couldn't you avoid calling me that? If you came here to make serious contributions to Misplaced Pages, then why do you offend and ridicule others?

No, I started this article in Misplaced Pages; I want to contribute to an NPOV Misplaced Pages article. The different views are important, and it should be written so that others see it that way, not just myself. I may write my own articles, or may not; that is a separate issue.

It's not clear what doesn't satisfy you. Either you don't like the theory of Thomas Nagel being mentioned, or you don't like any edits by me. As such, your requirements cannot be satisfied. All views are there, so the article is NPOV, and you, like anyone else, have the right to edit if there is something in particular that you consider wrong or want to make better. ] 00:58, 4 May 2004 (UTC)

I refer readers to the numerous examples to be found in this page and the archives thereof. ] 01:02, 4 May 2004 (UTC)

Of course, what I wrote in the article I explained somewhere on this page or in the archives. Paul did so also, and there were things on which we did agree. It could have been a very good discussion if there had been enough respect for each other here, just an elementary respect for another's humble personality. ] 01:15, 4 May 2004 (UTC)

But Tkorrovi's persistent trolling and dishonesty destroys any good will which arises from time to time. ] 11:35, 4 May 2004 (UTC)

Because I have been persistently treated like that for a long time, I think this is a plan to discredit me and push me out of here, maybe just to have the honour of being the major editors of artificial consciousness. How can that be solved by agreement? They only agree when they themselves drop the plan, but they have no reason to while they can still discredit me in the eyes of the others. Now all links to this article have also been deleted by these two. But I stay; I have the rights of a Misplaced Pages editor, like everybody else, to edit any article, and I will never go away. Even if only because I don't allow the rights of people to be violated, and I don't allow articles to be taken over or power to be gained by such methods. ] 14:08, 4 May 2004 (UTC)

::'''''How that can be solved by agreement? ''''' For instance, by going through the questions that have been asked and then answering them, ideally in plain English, and without mentioning yourself in any context, i.e. by keeping it objective, as you allege is good scientific practice. For example, what is wrong with bringing the emotional component of artificial consciousness into the article, as indicated by Igor Aleksander? ] 15:59, 4 May 2004 (UTC)

:::Who was against it? ] 16:03, 4 May 2004 (UTC)

:::Why didn't you link to the ] article from the ] article that you and Paul recently created? It is supposed to be related. It is also not Misplaced Pages policy to create parallel articles. ] 16:09, 4 May 2004 (UTC)

And he's ]. ] 14:25, 4 May 2004 (UTC)

As you see, he never stops, and has not the slightest wish to agree with me, or even respect me. ] 14:36, 4 May 2004 (UTC)

That is correct. Tkorrovi is a worthless troll. ] 14:42, 4 May 2004 (UTC)

== Towards NPOV ==

In accordance with the guidelines, I am placing here statements culled from the article which fall foul of the guidelines at ]. What we do now is repair them into NPOV form here and, if possible, replace them in the article. ] 02:17, 5 May 2004 (UTC)


==== Ability to predict ====

''One aspect is the ability to ] the external ]s in every possible ] when it is possible to predict for capable ].''

OK, this is but the first of many. What scholar says this? ] 02:20, 5 May 2004 (UTC)

] states in his paper '''''Artificial Neuroconsciousness: An Update''''' : ''Relationships between world states are mirrored in the state structure of the conscious organism enabling the organism to predict events.'' This is Corollary 5 of his fundamental postulate: ''The personal sensations that lead to the consciousness of an organism are due to the firing patterns of some neurons, such neurons being part of a larger number which form the state variables of a neural state machine, the firing patterns having been learned through a transfer of activity between sensory input neurons and the state neurons.'' Aleksander goes on to say ''Prediction is one of the key functions of consciousness. An organism that cannot predict would have a seriously hampered consciousness. It can be shown formally that prediction follows from a deeper look at the learning mechanism of corollary 4''. This Aleksander article is quite dense and, though its outline thesis is quite straightforward it would seem to require considerable study to understand all his algebra. I would much appreciate a lay person's interpretation and summary of Aleksander's thesis, putting it into context with that of other researchers. Unfortunately the contributors to the AC article don't seem quite to have got a handle on this subject and I am forced to reflect on the Bertrand Russell quote cited elsewhere. ] 09:35, 5 May 2004 (UTC)
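Aleksander's corollary that "relationships between world states are mirrored in the state structure of the conscious organism" can at least be caricatured in a few lines. A toy sketch (my own illustration, emphatically not Aleksander's neural state machine): a machine whose state-transition table comes to mirror observed transitions between world states, and which uses that learned structure to predict the next event.

```python
from collections import Counter, defaultdict

class TransitionPredictor:
    """Toy state machine: records observed world-state transitions and
    predicts the most frequently observed successor of a given state."""
    def __init__(self):
        self.transitions = defaultdict(Counter)
        self.prev = None

    def observe(self, state):
        """Record the transition from the previously seen state to this one."""
        if self.prev is not None:
            self.transitions[self.prev][state] += 1
        self.prev = state

    def predict(self, state):
        """Most frequently observed successor of `state`, or None."""
        counts = self.transitions[state]
        return counts.most_common(1)[0][0] if counts else None

p = TransitionPredictor()
for s in ["dark", "light", "dark", "light", "dark"]:
    p.observe(s)
print(p.predict("dark"))   # "light"
```

This captures only the bare idea of prediction-from-mirrored-state-structure; Aleksander's postulate additionally requires the learning to arise from transfer of activity between sensory input neurons and state neurons, which this sketch does not model.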

:I note that the discussion in which I quoted Bertrand Russell has been archived. What he wrote was "A stupid man's report of what a clever man says is never accurate because he unconsciously translates what he hears into something he can understand." ] 11:00, 5 May 2004 (UTC)

::As a clever man, Matthew, do you think I fairly (as in ] "write for the enemy" exhortation) represent the Aleksander view?

Thanks for the Aleksander quote. But I think that the article has this as it stands. We have no specific support for "capable human" here. I will add the reference to the article. ] 11:07, 5 May 2004 (UTC)

This issue is, I think, resolved. Is anybody unhappy with the new paragraph in the article? ] 11:28, 5 May 2004 (UTC)

Without commenting here, despite the above questioning, Tkorrovi has edited the paragraph. The procedure I followed here, to resolve difficulties with the paragraph, is that laid out in ], ] and elsewhere. The statement that Tk has inserted is practically identical to one we had here before which (i) nobody but Tk wanted and (ii) which Tk could not provide a source for. I am attempting to lead a paragraph by paragraph clean up of this article. This one I thought we had resolved. I am going to revert this particular paragraph to the non-controversial version. ] 15:26, 7 May 2004 (UTC)

=====What should be in the article=====

:This was an interpretation of the provided source; by ] it is allowed to interpret sources in Misplaced Pages. Also, that I was the only user who wanted to write that is not in contradiction with ] and ]. You may say that it was in contradiction with ], but this guide is also in contradiction with ] and ], which allow even new theories on the condition that they are not given equal importance with widely accepted theories. So in that sense we may proceed from the main guide, as long as it is clearly stated that the view is not widely supported. The requirement that the opinion of a small minority cannot be written in Misplaced Pages may be reduced to the absurdity that no individual can write anything in Misplaced Pages that doesn't come directly from sources; but interpreting sources in their own way is what most Misplaced Pages users do, and are allowed to do when they allow other opinions. For AC there is also the problem that it is a new field, and the people who study it are themselves a small minority compared to AI researchers or the rest of the scientific community. So there a single not well known study or researcher means a lot. So I think we should take it reasonably, not in the way that anything written by me or you should be deleted just because no known scientist said exactly that. ] 18:22, 7 May 2004 (UTC)

:From ]: "...the task is to represent the majority (scientific) view as the majority view and the minority (sometimes pseudoscientific) view as the minority view." This is in contradiction with ]. My statement was just one possible interpretation of the argument provided in the source, and not an obviously wrong interpretation. But I changed it, stating clearly that it is not widely accepted. So it is in accordance with the ] guidelines. ] 18:40, 7 May 2004 (UTC)

Misplaced Pages is not a place to publish original research, nor is it the place to publish the personal controversial interpretations of its editors. Please just find "capable human" in an appropriate source and the arguing on that point will stop. Note, however, that NPOV does '''not''' state that every POV be given equal weight. Nor does it say that '''every''' POV has to be represented. If, e.g., Aleksander turns out to be some quack then we might have to tone down his POV. And if the "capable human" point is so outlandish that only one person on Earth thinks it then it need not be included. Why are you so keen on it anyway? ] 19:15, 7 May 2004 (UTC)

:I understand now where the controversy between what is stated in ] and in ] comes from. What is said in ] can only be applied when some new theory is written in the article. This article doesn't describe any new theory; it describes and gives possible interpretations of different views about AC based on different sources. The aim is to provide all the human knowledge there is about AC, as ] advises, not any complete theory. Concerning "personal controversial interpretations of the editors", the ] policy says that "we should fairly represent all sides of a dispute, and not make an article state, imply, or insinuate that any one side is correct." This means that all interpretations should be included. It's obvious that most such interpretations come from one or another single editor. It seems that by policy it should then be said that "this and this editor said that...", but this would mean emphasizing the name of the editor in the article, which would be an unfair promotion of a small person. Therefore, as Misplaced Pages users should be considered small persons, none of them should be considered so special that what he says, nobody else could ever think. So considering only your personal view is somewhat arrogant as well; you only represent one person who thinks in some particular way. Therefore we can only say that a certain point of view or interpretation is not widely accepted. Concerning "capable human", how else would you describe a human who is fully conscious, having everything necessary for consciousness, with enough mental resources for that? Not everybody thinks that any mentally disabled person is conscious. But if you have another approach, then write it; as far as it is not wrong, my view, or a similar view held by other people, should not dominate. I'm not so keen to get the particular sentence mentioned into the article; most importantly I want it to become clear how writing the article fairly should be done. ] 21:10, 7 May 2004 (UTC)

As you said, we are small people in this field. But we do have an interest in the field. Unfortunately perhaps as many as 1% of all people would be prepared to express an interest in this field! Surely we cannot put everybody's (or 1% of everybody's) opinion here? If I can avoid the arrogance to which you refer then I will stop giving my own opinion overly much importance but I find that difficult. I hear you saying something similar above. It's the same for all of us. Essentially we must act as journalists and editors. Somewhere there is a Misplaced Pages article that tells us to write as if we are writing a news story. In newspapers they (are supposed to) make a distinction between reportage (the facts) and editorial (opinion). We are supposed to be doing reportage. I sympathise with you even while I deeply disagree about that sentence: That particular sentence says something '''true''' about AC to '''you'''. Find somebody authoritative to agree with you! (Newspapers do this too but if they are caught out it is called bias!) I would like to try for a not too long, snappy article that says all the main points. ] 21:33, 7 May 2004 (UTC)

:It's not such a big problem to consider all opinions here; by far not every person has a different opinion, and unfortunately Misplaced Pages can only include the views of these people who edit it, which is why everything is done to increase the number of editors, even allowing anonymous editing. The article that recommends writing in news style is ]. But this is only a recommendation, not a compulsory policy. In ] it is recommended to write event articles in news style, but the AC article is not an event article. It is not obligatory to write such articles as reportage. That style should be good for many articles, especially those that report facts or events, but an article like the AC article should rather give all the knowledge there is about the topic, and give the reader an idea of how different people may look at these issues. "Ideally, presenting all points of view also gives a great deal of background" (]); this is not the same as reporting the facts, and is especially important for such a controversial topic as AC. Also, as there is not a huge amount of information about AC, acting as reporters would not provide enough reported events to form any complete representation of the knowledge there is. So this reporting style is a matter of opinion, not an obligatory rule. There is also no very strict policy concerning bias in particular views: "But experienced academics, polemical writers, and rhetoricians are well-attuned to bias, both their own and others', so that they can usually spot a description of a debate that tends to favor one side" (]). Misplaced Pages should be unbiased by presenting all views. Misplaced Pages should present all views, and all possible interpretations, and it is allowed to interpret the sources; it is not demanded that every sentence we write must come from some authoritative person.
] is rather "representing disputes, characterizing them, rather than engaging in them", not deleting one disputed interpretation because no authoritative person said exactly that. A lot of how you and others interpret the views is also not said exactly so by any authoritative person. ] 23:03, 7 May 2004 (UTC)

::The opinion "unfortunately Misplaced Pages can only include the views of these people who edit it" is wrong. Of course we can represent the views of others. When acting as your copy-editor I do it all the time. :-) ] 00:17, 8 May 2004 (UTC)

:::To correct, I meant only the views put there by the people who edit it; these may be their views, or the views of others. ] 00:37, 8 May 2004 (UTC)

By news reportage I did not mean (and, I think, Misplaced Pages does not mean) news articles as such (dated day by day) but rather the type of news feature article that you might read should a clued-up journalist write it. Imagine if there were an Artificial Consciousness article in New Scientist. What would that look like? What would we want in it? Do you think we could do that here? ] 23:20, 7 May 2004 (UTC)

:Misplaced Pages is not New Scientist. New Scientist only represents the most established views; there is a rigorous peer review before anything is published there. Articles about AC have not much chance of getting there, as there are not many peers in the field, and almost no peer-reviewed articles. Misplaced Pages aims to include all views, not only the most established; as I mentioned above, even pseudoscientific views can be included, when it is mentioned that these are minority views. It's not desirable, though, to include something that is obviously wrong (or, say, that has a lot of negative peer review confirming that). In ] it is also said that "you don't have to get all of your information on entries from peer-reviewed journals", which I'm not sure is allowed in New Scientist, or your article would not get a positive peer review there. So by all that, Misplaced Pages is very different from New Scientist. In ] it is also stated that Misplaced Pages should not adopt a "scientific point of view" instead of a "neutral point of view", so Misplaced Pages is clearly not a scientific publication like New Scientist; by ] it is a general encyclopedia, a "representation of human knowledge", not a publication for widely accepted scientific research (peer review etc). And one more thing. New Scientist is a very good journal, containing only research whose correctness is thoroughly checked. But peer review may take a year or two. Can you imagine how many years (or centuries) it would have taken to develop, for example, the Linux operating system, if nothing could be used before it was published in, say, New Scientist? ] 23:53, 7 May 2004 (UTC)

::This is an encyclopaedia, not an operating system. AC is a scientific subject - the appropriate place to see an exposition on AC is New Scientist or Popular Psychology. But, I agree, some views that either journal would ignore we should include. BUT it seems to me that you yourself prefer the scientific approach, no? ] 00:11, 8 May 2004 (UTC)

:::Linux was built based on knowledge as well, and knowledge is what Misplaced Pages should represent. I certainly prefer ], but not peer-reviewed-style rigour for just everything; I think that all knowledge in science should be available for the users of this knowledge to decide. And AC, some would rather put it under philosophy than science. I want it to be science, and a more precise science than psychology, under which this article was once created. ] 00:32, 8 May 2004 (UTC)

You are right, lots of my points and paragraphs need the same rigorous treatment I am giving yours. It just so happens that I started at the top. Someone(!) put more of your paragraphs at the top than they put mine. The idea was to do every paragraph but it is very hard work and I fear I will lose enthusiasm at this rate. ] 23:20, 7 May 2004 (UTC)

==== attentiveness ====

''Another test of AC, in the opinion of some, should include a demonstration that machine is capable to learn the ability to filter out certain ] in its environment, to focus on certain stimuli, and to show attention toward its environment in general. The ]s that govern how human attention is driven are not yet fully understood by scientists. This absence of knowledge could be exploited by engineers of AC; since we don't understand attentiveness in humans, we do not have specific and known criteria to measure it in machines. Since unconsciousness in humans equates to total inattentiveness, an AC should have outputs that indicate where its attention is focused at any one time, at least during the aforementioned test. ''

I am looking for references to support the above. Where something is a truism or plainly logically follows references are obviously not required. But we have to be careful to include relevant material only. Having said that I like ''attentiveness'' as a desirable attribute of AC - at least it can be tested! ] 12:28, 6 May 2004 (UTC)

I have nothing against the text above and I don't think it's POV, but it's difficult to back such explanations with references, because every paper is often concerned with a single aspect. Also, everything is often very interconnected, like awareness, attention, imagination and prediction. This is from the point of view of conceptual spaces, not perception as in the text above. Antonio Chella from the University of Palermo writes: "The mapping between the conceptual and the linguistic areas gives the interpretation of linguistic symbols in terms of conceptual structures. It is achieved through a focus of attention mechanism implemented by means of suitable recurrent neural networks with internal states. A sequential attentive mechanism is hypothesized that suitably scans the conceptual representation and, according to the hypotheses generated on the basis of previous knowledge, it predicts and detects the interesting events occurring in the scene. Hence, starting from the incoming information, such a mechanism generates expectations and it makes contexts in which hypotheses may be verified and, if necessary, adjusted." ] 11:47, 12 May 2004 (UTC)
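The "generate expectations, verify, adjust" cycle Chella describes can be caricatured in a few lines. This is a deliberately crude sketch of the control loop only, not of his recurrent networks; the model and event names are hypothetical:

```python
def attentive_scan(events, expectation_model):
    """Scan an event stream, flagging events that violate current expectations.

    expectation_model maps an event to the event hypothesized to follow it.
    """
    surprises = []
    expected = None
    for event in events:
        if expected is not None and event != expected:
            surprises.append(event)  # attention is drawn to the mismatch
        # Adjust: form a new hypothesis about the next event.
        expected = expectation_model.get(event)
    return surprises

# Hypothetical expectations: after "bell" we expect "food", after "food", "rest".
model = {"bell": "food", "food": "rest"}
print(attentive_scan(["bell", "food", "noise"], model))  # -> ['noise']
```

The point of the sketch is only that expectation-driven scanning gives a natural criterion for what is "interesting": the events that break the predicted pattern.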

==== learning ====

The above example includes "learning". It seems to me that the ability to "learn" is not '''necessary''' for consciousness. What scholar says otherwise? ] 12:34, 6 May 2004 (UTC)

:Aleksander: '''''Corollary 4: Perceptual Learning and Memory''''' states:
:''"Perception is a process of the input'' sensory ''neurons causing selected perceptual'' inner ''neurons to fire and others not. This firing pattern on inner neurons is the inner representation of the percept - that which is felt by the conscious organism. Learning is a process of adapting not only to the firing of the input neurons, but also to the firing patterns of the other perceptual inner neurons. Generalisation in the neurons (i.e. responding to patterns similar to the learnt ones) leads to representations of world states being self-sustained in the inner neurons and capable of being triggered by inputs similar to those learned originally."''
:] 18:06, 6 May 2004 (UTC)
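The generalisation claim in Corollary 4 (learned inner representations "capable of being triggered by inputs similar to those learned originally") can be illustrated with a minimal sketch, assuming binary patterns and a Hamming-distance tolerance. Both simplifications are mine, not Aleksander's formalism:

```python
def hamming(a, b):
    # Number of positions at which two equal-length binary patterns differ.
    return sum(x != y for x, y in zip(a, b))

class ToyPerceptualMemory:
    """Learned patterns are recalled by similar, not just identical, inputs."""

    def __init__(self, tolerance=1):
        self.learned = []           # stored "inner representations"
        self.tolerance = tolerance  # how much generalisation is allowed

    def learn(self, pattern):
        self.learned.append(pattern)

    def recall(self, pattern):
        # Return the closest learned pattern within tolerance, else None.
        best = min(self.learned, key=lambda p: hamming(p, pattern), default=None)
        if best is not None and hamming(best, pattern) <= self.tolerance:
            return best
        return None

mem = ToyPerceptualMemory(tolerance=1)
mem.learn((1, 0, 1, 1))
print(mem.recall((1, 0, 0, 1)))  # one bit off, so recalled -> (1, 0, 1, 1)
print(mem.recall((0, 1, 0, 0)))  # too dissimilar -> None
```

A real neural state machine would of course realise this through the dynamics of the network itself rather than an explicit pattern store; the sketch only shows the behavioural property being claimed.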

All connectionist systems at least are learning systems, so many scholars say otherwise. ] 12:44, 6 May 2004 (UTC)

That is not a (logically) valid argument. You would first have to show that a "connectionist system" is necessary or desirable for AC. Then you would have to show that they learn. First you might have to define "learn". No! We are not here to reason it out for ourselves. If I could do that I would be collecting a ] in Stockholm. The ] is: Cite the scholar(s). Give references. ] 12:53, 6 May 2004 (UTC)

For example Lloyd considers that a connectionist system is necessary for consciousness; he talks about it in the paper at http://www.consciousentities.com which was linked from the article. I don't like the connectionist view, except that in some sense an AC system should be similar to a neural network, with learning and connections. ] 13:55, 6 May 2004 (UTC)

You say the link supports the view that it is necessary to have a connectionist system for AC. Not that a '''necessary''' attribute of AC is an ability to learn. ] 15:10, 6 May 2004 (UTC)

All neural networks are learning systems, trainable systems, and connectionists like Lloyd consider that these are necessary for AC. ] 15:17, 6 May 2004 (UTC)

Where does he say this and can you provide a quote? He needs to say either something like "learning is a necessary attribute of consciousness" OR "all connectionist systems are capable of learning and connectionist systems are a necessary attribute of consciousness". ] 15:52, 6 May 2004 (UTC) ] 15:34, 6 May 2004 (UTC)

The only AI systems that are not learning, as far as I know, are cellular automata, and it's not sure that they cannot learn either. Or do you know some other example? (By my 1913 public domain Webster, which is on my computer now, one meaning of "learning" is "To gain knowledge or information of".) ] 17:41, 6 May 2004 (UTC)

Find some respected scholar who says this. ] 23:10, 6 May 2004 (UTC)

OK, so '''ability to learn''' is not a necessary attribute of AC? ] 19:19, 7 May 2004 (UTC)

In ''Engineering consciousness'', a summary by Ron Chrisley, University of Sussex, consciousness is/involves self, transparency, '''learning''' (of dynamics), planning, heterophenomenology, split of attentional signal, action selection, attention and timing management. ] 12:22, 12 May 2004 (UTC)

Daniel Dennett, ''"It might be vastly easier to make an initially unconscious or nonconscious "infant" robot and let it "grow up" into consciousness, more or less the way we all do."''

''"Cog will not be an adult at first, in spite of its adult size. It is being designed to pass through an extended period of artificial infancy, during which it will have to learn from experience, experience it will gain in the rough-and-tumble environment of the real world."''

''"Nobody doubts that any agent capable of interacting intelligently with a human being on human terms must have access to literally millions if not billions of logically independent items of world knowledge. Either these must be hand-coded individually by human programmers--a tactic being pursued, notoriously, by Douglas Lenat and his CYC team in Dallas--or some way must be found for the artificial agent to learn its world knowledge from (real) interactions with the (real) world."'' ] 23:56, 13 May 2004 (UTC)

An interesting article about learning is by Axel Cleeremans, University of Brussels, and Luis Jiménez, University of Santiago, where learning is defined as ''“a set of phylogenetically advanced adaptation processes that critically depend on an evolved sensitivity to subjective experience so as to enable agents to afford flexible control over their actions in complex, unpredictable environments”.'' ] 11:36, 17 May 2004 (UTC)

== AC article should be deleted ==

I think the AC article on the whole should be deleted. --] 02:26, 5 May 2004 (UTC)

:But that is just my opinion. It makes the AC talk page interesting, though. ] 04:10, 5 May 2004 (UTC)

== By definition? ==
''Simulated consciousness cannot be real consciousness, by definition.'' says the article. However, this may not be true. Consider the following cases: a simulated aeroplane and a simulated author. A simulated aeroplane can simulate the making of a simulated flight. A simulated author can simulate the writing of a simulated story. However, note that, although it is easy to tell the difference between a simulated flight and a real flight, it may not be nearly so easy to tell the difference between a simulated story and a real story. In fact a good enough simulated author will be able to write simulated stories which pass all the tests of real stories. In principle there is no difference between a simulated story and a real story, whereas there is an inherent difference between a simulated flight and a real flight. Since it now appears that there are at least two classes of concepts (those in which the distinction between simulated and real examples of the concept is meaningful and those in which it isn't), the question is "Which class does consciousness belong to?". Mere appeal to the definition of "simulated" is not enough. Perhaps a conscious being is like a story rather than a flight. -- ] 20:03, 6 May 2004 (UTC)

:I don't think there is any such thing as a simulated story. A story is just a story. Therefore Derek's question goes to the heart of the issue. There is only one example of consciousness that each of us can draw on: our own. Anything else is theoretical. The assessment of another entity's consciousness is therefore necessarily subjective; there is no exterior (objective) model against which to judge it. The only yardstick for assessment of an implementation of artificial consciousness must therefore be (analogous to the method of the Turing test) whether a set of people judge that implementation to be effective. Artificial consciousness and simulated consciousness are not synonymous. Indeed the idea of simulated consciousness doesn't make sense, in the same way that the idea of a simulated story doesn't make sense. So, yes, indeed, a conscious being is like a story rather than a flight. ] 11:31, 7 May 2004 (UTC)

::Simulated story was considered to be a story generated by a simulated author, for example (Java must be installed to run that). ] 13:20, 7 May 2004 (UTC)

::Interesting. That seems to be going beyond the standard definition of simulated. It seems to imply that if I were to ask a person with real consciousness for a list of web pages featuring the phrase "artificial consciousness", I would receive a real answer in the form of a list of URLs, but if I were to ask a machine with simulated consciousness for the same thing, I would receive a simulated answer in the form of a list of URLs. The two lists might well look identical but apparently one would be real whereas the other would be merely simulated. Is that what you mean to say? -- ] 14:41, 7 May 2004 (UTC)

::: Derek wrote (above): "A simulated aeroplane can simulate the making of a simulated flight. A simulated author can simulate the writing of a simulated story." I would have written: "A flight simulator simulates the making of a real flight. (There is no point in it simulating a simulated flight, unless it is a test simulator intended to show whether the actual flight simulator works, although even this is unlikely.) A simulated author can write a real story. (I do not understand the idea of a simulated story. Like Derek says, a list of URLs is a list of URLs regardless of who or what produced it.)" But although there might appear to be semantic identity between the result of a flight simulator and the result of a simulated author, there isn't really. Although a flight simulator simulates a real flight, its result is a simulated flight. But a simulated author produces a real story, come what may. So I don't see where we're going beyond the definition of ''simulated''. In an attempt to substitute consciousness in the argument, we get: "A consciousness simulator either produces real consciousness or simulated consciousness." I'd suggest that there's no way to tell whether the result of the consciousness simulator is simulated consciousness or real consciousness - the manifestation of consciousness is hence more akin to the list of URLs than to a simulated flight. That might seem counter-intuitive, but is there a counter-argument? ] 16:59, 7 May 2004 (UTC)

Cannot a real author write a simulated story? I reckon a simulated author could write a real story. On the other hand: A real printing press cannot produce a simulated book. And a simulated printing press cannot produce a real book. Weird. What this shows is, I think, that simulated may have more than one meaning. ] 15:47, 7 May 2004 (UTC) Which, Derek, was what you were saying? I'll have another read! ] 16:02, 7 May 2004 (UTC)

::What I was trying to say, Paul, is that although English will allow us to discuss simulated stories, brave potatoes, or waterproof fluency, that doesn't mean that these words refer to concepts that are real, useful or meaningful. They may be; they may not be. Referring to the definition of ''simulated'' will not tell us whether simulated consciousness is real or not since some simulated things are real and some are not. So I think that the words ''by definition'' are inappropriate. They seem to give unwarranted authority to a statement which may be true or untrue but is definitely controversial. In my opinion it's probably untrue but I'm happy to admit that I don't really know and that I don't believe that anyone else does (although they too may have an opinion). -- ] 20:53, 8 May 2004 (UTC)

::I can tell whether a flight is real or simulated by looking out of the window and checking whether I see pixels or the blue and beyond. But how would I discriminate between a real and so-called simulated story? I suggest there is no way of doing this. Therefore there is no such thing as a simulated story and, by inference, there is no such thing as simulated consciousness. That's not to say that there can't be a consciousness simulator (although that is perhaps a misleading name for whatever it might be) just as there might be a simulated author (as Tkorrovi suggests above) who lives somewhere in Java. ] 16:59, 7 May 2004 (UTC)

:::Very true. But then again, if the definition of an author is ''one who writes stories'', is the author really ''simulated'' just because it is ''artificial''? -- ] 01:59, 9 May 2004 (UTC)

"There is no such thing as simulated consciousness" is ambiguous. My first reaction was to say yes, that is what I think. In that consciousness is consciousness, simulated, synthetic, artificial or natural. But you could mean that genuine AC is impossible. ] 19:08, 7 May 2004 (UTC)

Derek, do you think the point is irretrievably lost and the sentence needs to be removed or do you have an alternative form of words that might preserve the obvious (only to me?) intent? ] 04:27, 9 May 2004 (UTC)

If the sentence "Simulated consciousness can not be real consciousness, by definition." were to be replaced by the sentence "Simulated consciousness may not always be real consciousness." and the "Yet" replaced by "But", I think that the paragraph would be nearer the truth.

It's as difficult for us to write sensibly on the science of consciousness as it would have been for Victorians to write sensibly on the science of flight and for much the same reason. So if we are going to write on the subject at all, we need to be very careful to describe areas of ignorance in a manner which makes clear the level of ignorance involved. For an interesting parallel to our current article, read Encyclopedia Britannica's 1911 article on the Sun, a fascinating mixture of fact and speculation. -- ] 16:16, 9 May 2004 (UTC)

::One of the things we have been doing here for a while is to discuss semantic differences between different terms. In terms of citations, as indicated above there doesn't seem to be much on the internet on 'artificial consciousness' per se, though there is plenty on various aspects of consciousness within an AI context. Therefore the adjective used is not of great significance. Because it is not an established academic discipline in its own right, whereas ] is, we find that references to machine consciousness use a number of different terms which mean the same thing. Artificial consciousness is just one of those terms, perhaps claiming closest affinity with artificial intelligence by use of the word ''artificial''. Therefore the ] article should be the most rigorously scientific of all the alternatives, and there could happily be a ] page. But let's keep the main page free of anything without citation. ] 00:10, 10 May 2004 (UTC)

==Copy-editing problems==

Tkorrovi, please explain what is meant by this. I do not understand it.

:This view assumes that anything that cannot be modelled by AC must be in contradiction with physicalism, but in his "What is it like to be a bat" argues that subjective experience cannot be reduced because it cannot be objectively observed, but subjective experience is not in contradiction with physicalism.

] 12:52, 7 May 2004 (UTC)

:This can perfectly well be understood; maybe one comma should be added:

::"This view assumes that anything what cannot be modelled by AC must be in contradiction with physicalism, but in his "What is it like to be a bat" argues that subjective experience cannot be reduced, because it cannot be objectively observed, but subjective experience is not in contradiction with physicalism."

:] 13:27, 7 May 2004 (UTC)

It's "that" or "which", not "what". I do not understand "This view assumes that anything what cannot be modelled by AC must be in contradiction with physicalism". Firstly, what is "this view". Many are expressed. Which one are you referring to? ] 13:53, 7 May 2004 (UTC)

:"This" refers to something what was mentioned before. If we say "this section" then it means the section where the sentence appears. As the section described a view, and no other view was mentioned, then it means a view discussed in that section. ] 14:21, 7 May 2004 (UTC)

Are you saying that the clarity of that sentence has not been improved? Now that it has I will leave you to that view. What view, your view on consciousness? ] 14:30, 7 May 2004 (UTC)

:No I don't, I just answered your question; "Genuine AC view" indicates which view is discussed. ] 14:50, 7 May 2004 (UTC)

::Point proven! You make it too easy for me! It wasn't "that view" to which I referred. ] 14:54, 7 May 2004 (UTC)

:::My reply was not to the second half of your question. You should clarify the second question you asked. ] 15:00, 7 May 2004 (UTC)

::::When in a hole, stop digging. ] 15:50, 7 May 2004 (UTC)

==Under the Eye of the Clock, A Brief History of Time==

Matthew Stannard, an expert in the English language, who constantly criticizes my use of English, interprets in his latest edit "capable human" as a "human deemed to have the capabilities of humanity". In what dictionary did you find such a definition? The word "capable" has a much more exact meaning, which is widely known to every educated person; in the 1913 public domain Webster the first definition is "Possessing legal power or capacity; as, a man capable of making a contract, or a will". The capable person must have enough mental powers and ability to think to make a contract or will; what this means is legally very well determined. Old people who are going to make a will are sometimes asked questions like who is the prime minister of his/her country, to find out whether he/she is mentally able, and it is carefully checked that every sentence is what he/she really wants. The other definition is "Possessing adequate power; qualified; able; fully competent; as, a capable instructor; a capable judge; a mind capable of nice investigations", but these are mostly meanings for specific cases (like an instructor or judge, not an ordinary human). But not "deemed to have the capabilities of humanity". ] 17:10, 10 May 2004 (UTC)

Let me, just for the sake of argument, accept that Matthew may have made a mistake. That does not excuse the mistakes of any others. Nor does it invalidate valid criticisms he makes of others. In other words, Tkorrovi, you cannot use someone else's imperfections as an excuse for yours. In my opinion your own supplied definitions scuttle your argument (i.e. that AC must have all the abilities of a capable human) just as well as any of Matthew's torpedoes. Imperfectly yours, etc. ] 17:37, 10 May 2004 (UTC)

No, I admit I may make mistakes, as well as you may, and there is nothing wrong in correcting the mistakes of others. I only want to say that this doesn't justify holding yourself very high and belittling others. Examples from Matthew Stannard: "Ability to learn is, according to some experts, something that can be lost in certain people. The question is whether someone who has lost this ability should nevertheless be deemed conscious. I pick as an illustration someone who has had pointed out to them on numerous occasions that they make an elementary mistake in their written grammar but who nevertheless carries on making the same mistake.", "There is benefit in using a dictionary (any dictionary, but a learner's dictionary in particular) to discriminate between the usage of what and that. One of the benefits of humanity is that people (or at least some people) are able to learn languages. Some people, unfortunately, never master this art." (as a reply on this page to my suggestion to use the public domain dictionaries advised in Wiktionary). The other example of hypocriticism by Matthew Stannard was that he recently accused me in a public place of making proprietorial claims on the article, which I never did. And you even confirmed that you are not going to have even the slightest respect towards me. I said here in the NPOV section, trying to reconcile, "It could been a very good discussion if here was enough respect to each other, just an elementary respect to other's humble personality. Tkorrovi 01:15, 4 May 2004 (UTC)", you replied after some talk "And he's paranoid. Paul Beardsell 14:25, 4 May 2004 (UTC)", I said "As you see, he never stops, and has not a slightest wish to agree with me, or even respect me. Tkorrovi 14:36, 4 May 2004 (UTC)" and you replied "That is correct. Tkorrovi is worthless troll. Paul Beardsell 14:42, 4 May 2004 (UTC)" Do you call that criticism? This is attacking another person, and in a way also the article and other people who may want to talk here.
Stop acting like that. I know that such behaviour is tolerated by several people in Misplaced Pages, but this doesn't justify anything either. Based on everything above I have a justified suspicion that you thought this article was ridiculous and came here with Matthew Stannard to make jokes about the article and about me, who started it. Not everybody thinks that this article is ridiculous, and if there is something wrong, this is not the way to improve it. ] 18:48, 10 May 2004 (UTC)

:That's the paranoia I mentioned earlier. ] 07:35, 11 May 2004 (UTC)

::I just wanted to find a reasonable explanation. It's human that something we don't understand may seem ridiculous to us. Smarter people just usually think more. If they find a reason why something is wrong, then they say it, and if they don't find one, and find that they are not at the moment competent to criticize, then they choose to ignore it instead of laughing at the people involved and thinking that they would achieve something by that. This can be solved by thinking more, but if the reason for ridiculing others is paranoia, then it's sad. You said to me once (in Archive 8) "I want to use the term artificial consciousness in the same way I might one day have to use natural consciousness to distinguish it from the artificial variety and as a separate subset of consciousness. You must not be allowed to impose some other meaning on the term than what it literally does now mean." Then someone in the Village Pump said to me that what you wanted to say was "I don't want you to". I started to think that maybe you indeed just act like a child who feels hurt when something is not as it likes, and then starts to attack others as a protest. As you may notice, most of the people who have been here don't support your personal attacks. ] 12:35, 11 May 2004 (UTC)

Now my criticism again, criticism only, so as to have a reply to it. Matthew Stannard interprets in his latest edit "capable human" as a "human deemed to have the capabilities of humanity". In what dictionary did you find such a definition? The word "capable" has a much more exact meaning, which is widely known to every educated person; in the 1913 public domain Webster the first definition is "Possessing legal power or capacity; as, a man capable of making a contract, or a will". A capable person must have enough mental powers and ability to think to make a contract or will, and what this means is legally very well determined. Old people who are going to make a will are sometimes asked questions, like who is the prime minister of his/her country, to find out whether he/she is mentally able, and it is carefully checked that every sentence is what he/she really wants. The other definition is "Possessing adequate power; qualified; able; fully competent; as, a capable instructor; a capable judge; a mind capable of nice investigations", but these are mostly meanings for specific cases (like an instructor or judge, not an ordinary human). But not "deemed to have the capabilities of humanity". ] 19:01, 10 May 2004 (UTC)

:The sentence you keep on inserting into the article therefore means that any AC must have the capability to enter into legal contracts as that is an ability of a capable human. Your sentence is not backed up with a quote from any scholar or a citation of any article, nor do you demonstrate that it follows logically from any such reference. And it isn't even common sense. ] 07:56, 11 May 2004 (UTC)

::"Capable" was just meant to mean a level of development. Maybe we can also say "mentally able". There were many attempts to develop AC what should exhibit human behaviour, not just some behaviour what seems to be conscious for some, like "The system must be able to acquire arbitrary new knowledge and cognitive skills from a human instructor and must understand the acquired knowledge. It must exhibit human-like psychological states, in particular, motivated voluntary behavior and emotional states such as appreciation of a joke." This also shows that learning is deemed necessary for AC. This is not the best paper, just one example. ] 17:53, 11 May 2004 (UTC)

::Yes, I think it is vitally important that we restrict discussion of artificial consciousness to instances that are capable of being judged against the ''capable human'' (the usual legal term is actually 'capable person', or perhaps Tkorrovi just means '']''). This accords with the overall idiosyncratic nature of this article. After all, we wouldn't want to consider an ''illegal'' implementation: an instance of artificial consciousness that was incompetent, that perhaps artificially authored graffiti and sprayed it on railway carriages, that simulated a sociopath and killed anyone who came within range, and so on. So let us restrict the discussion to instances that are capable of demonstrating integrity and that are eventually so trustworthy that we can hand over world leadership to them. What a noble aim! ] 08:45, 11 May 2004 (UTC)

:::We also talked about this earlier. The aim of artificial consciousness cannot be creating an "artificial idiot" which need not have any mental ability; then we could create literally nothing and call it artificial consciousness. That would make artificial consciousness nonsense. I don't know that any scholar ever seriously suggested it. "So let us restrict the discussion to instances that are capable of demonstrating integrity..." Not that we should restrict discussion to that, but what is wrong in trying to create AC that demonstrates integrity? "...eventually so trustworthy that we can hand over world leadership to them" This would be the subject of another long discussion, similar to "AI taking over the world", which was discussed a lot on the Internet, but recently it seems to me that more and more people who are competent in AI think that this is an absurd idea, often propagated by incompetent people who have no other way to make their ideas interesting. ] 11:39, 11 May 2004 (UTC)

You meant hypercriticism, not hypocriticism. But you are hypersensitive to criticism. You seem to understand English with the same lack of precision you write it. You are unable to explain your reasoning. You stubbornly will not give way. You assert you know how Misplaced Pages works yet you contribute to only this one article. You are a pain in the neck to deal with. When the hand of friendship is extended to you, you bite it. Either that or we just do not like Estonians. ] 19:05, 10 May 2004 (UTC)

::Yes, there seems to be confusion between propriety, proprietorial, and proprietary, which must be very difficult. Is ''hypocriticism'' the opposite of ''hypercriticism'', as hypoglycaemic is the opposite of hyperglycaemic? The noun from hypocritical is hypocrisy - not hypocriticism (from below), just one of the idiosyncrasies of the English language. But I am still not sure whether Tkorrovi is being sensitive to criticism or accusing me of hypocrisy - both probably, but who cares? A hypocrite says one thing and does another. Is there an equivalent word for someone who says one thing but ''means'' another? Perhaps this should be dubbed the 'incapable human'? ] 08:45, 11 May 2004 (UTC)

:::Proprietorial means "Of or pertaining to ownership; proprietary; as, proprietorial rights" (1913 Webster) and proprietary means "Belonging, or pertaining, to a proprietor; considered as property; owned; as, proprietary medicine", so a proprietorial claim may mean that something was claimed to be somebody's property (a proprietary claim) or that someone has other rights of ownership to it. Why do you accuse me of making proprietorial claims on the article when I never did so? Don't you understand that this is a serious accusation? ] 15:35, 11 May 2004 (UTC)

::::It may be that a dictionary gives proprietary as synonymous with proprietorial, but an important point about English is that there are practically no synonyms in the language (according to Fowler, whom I respect). Proprietorial and proprietary have distinct meanings and usages. If they meant the same thing then there wouldn't be two words in the language. To accuse someone of being proprietorial about a wikipedia article is no big deal. We are all proprietorial about the items we have on our watchlists - it's the first thing we look at when we log in - to see who's been messing with '''''my''''' stuff. It's against the spirit of Misplaced Pages, however, where we are exhorted via the open licence to forgo the ownership which we naturally feel about our writing. It's intended to be helpful to warn someone who is being unduly proprietorial to watch out for their own ego. Someone who is in dispute about a page, who complains bitterly about others' edits, and persists in preserving their own form of words is being proprietorial, and they don't need to say so; it's plainly self-evident, and it's not an insult, just a friendly reflection and a warning not to become obsessed. ] 23:55, 11 May 2004 (UTC)

:::::I never said, and the dictionary doesn't say, that proprietorial and proprietary are synonyms. Trying to preserve some phrase is not a proprietorial claim when it is not a copyrighted quote. Accusing me of making proprietorial claims is accusing me of violating the Misplaced Pages copyright (the terms of the GNU Free Documentation License). This is a serious accusation. I never made any proprietorial claims on the article and never considered myself to have any copyright over the article or the parts of it which I edited. Take back your accusation. ] 08:01, 12 May 2004 (UTC)

::::::Of course I take back any accusation you feel I might have made and apologise unequivocally for any offence you might have taken about the notion of your proprietoriality over the content of the artificial consciousness page. It is not rational, however, to infer that you may have violated Misplaced Pages copyright, as I was at pains to ensure you understood the distinction between proprietorial and proprietary, and I never suggested anything to do with the latter term. I was making an observation, which I don't think was inchoate, that you would do well to take a step back, so to speak. Check out . ] 08:30, 12 May 2004 (UTC)

:::::::OK, apology accepted. It is not the exact wording that is important, but it is sometimes important that a description is complete, so it cannot always be edited just by taking something out. Such descriptions are not so easy to formulate, and this is because they don't change very rapidly. Therefore it would be more reasonable to add other interpretations rather than delete the existing ones. We may add to every interpretation that it is not widely accepted, but different interpretations help the reader to better understand the different ways a concept can be understood. We may back interpretations with different sources, but we cannot replace all interpretations with quotes; there may not be exactly such quotes, because this article is like an overview, and every source may be specialized in only a certain aspect. Maybe the best approach for an article on such a not yet well established topic would be not to delete anything except what is obviously wrong, and to include as many views as possible. This is the best we can do and is not in contradiction with Misplaced Pages rules. If we try to write a scientifically perfect article, then in trying to do it without contradiction we would inevitably lean towards scholars with a certain view, and may even go in the wrong direction, as the research is still very preliminary. We should discuss more how to write such an article. ] 11:27, 12 May 2004 (UTC)

==A good article==
Perhaps a problem with this topic is knowing how to build a good encyclopedia article. It could become a meandering piece, essentially an unstructured set of notes about whatever any wikipedians happen to pick up from elsewhere. What is needed, I think, is a vision, a focus, and to that end I suggest that consciousness, whilst difficult to define entirely, is nevertheless a singular phenomenon. Whatever theories there are about how it works, there is probably only one which is right. Which? I.e. what is the leading theory, and to answer that, we need to know who the leading theorists are. If the discipline is too immature to answer this question, i.e. development is of such a nature that no one can tell who the leading theorists are, we should at least ask who is making progress in the field. So we might divide the experts up into those who are active today, those who were once active and whose ideas have been superseded, and those whose ideas, whilst old, provide the bedrock from which modern theories have been built. My feeling is that if we can agree about the structure of the AC article, and define that in this talk area, then we will make better progress on the article itself. ] 08:00, 13 May 2004 (UTC)

:This would be work for several months, in fact work that nobody has exactly done before. To start from something, I collected the links from the first 200 results of a Google search for "artificial consciousness" that are about the topic at , which should give some kind of overview, at least of the most active research. As you see, several articles are those we already saw. There is no complete theory at all, and different papers are mostly about different aspects of AC. Attention, awareness (of processes), imagination, prediction, learning, perception, association, dynamism and adaptability are possible aspects of AC, and they may not be separate modules of the AC system, but aspects of the same mechanism. Many of them are mentioned in connection with neural networks, but a neural network is in essence a simple mechanism. It is restricted to recognizing images though, so it may need some additional software. There are other mechanisms that are less restricted, like cellular automata, but nobody has managed to train them yet. And lastly there is my mechanism, which is very simple, but deemed to be less restricted than neural networks and, for some theoretical reasons (like Dennett's multiple drafts principle), may in some way have all these aspects. But there is no complete AC theory yet. I tend to think that the right theory may be somewhere near to what I talked about, but the article is not for such conclusions. Then there is the top-down approach, like creating the system and inputting all human knowledge, which seems quite infeasible to some, and the bottom-up approach that many AC projects are based on. There is not much software except neural networks and artificial emotions systems. Well, that is what I think. But more importantly it is necessary to systematize the articles there are. I am trying to work on it, but it's not very easy. ] 21:22, 13 May 2004 (UTC)

:I added keywords to the AC articles . A lot of the articles are a kind of philosophy of the sort we also talked about here a lot, unfortunately often quite fruitless concerning how to actually make an AC system. But as there is a lot of such philosophy, it should be in the article as well. There is no leading theory, but what I like the most is an article by Igor Aleksander and Owen Holland in Guardian Unlimited ; this is also the most similar to how I understand AC, and almost the only theory that really gives an idea of what AC is. I think that they will not succeed in building their robot, though I think that at least the given details of the theory are correct. But it most likely would not be able to adapt to any slightly more complex environment, because it's not unrestricted enough. I think there are reasons to consider that, concerning the principles of making an AC system, the work of Daniel Dennett and Igor Aleksander is the most essential; then there are many others whose work adds to that. ] 22:59, 14 May 2004 (UTC)

== Artificial Neuroconsciousness: An Update ==

For those who wanted me to explain ]'s theory, this is a very preliminary (as I do everything too quickly) description of . The neural networks he used as an example are very primitive preliminary models of AC, but based on these models he derived a quite complete, and I think more or less correct, basic theory of AC, which may also be a basis for AC implementations by mechanisms other than neural networks. The examples may be implemented by a freeware program "Machine Consciousness Toolbox", but unfortunately the download site is down, and there is no other AC software (except artificial emotions software) available for download; I wonder if my program is the only one.

''"Here the theory is developed by defining that which would have to be synthesized were consciousness to be found in an engineered artefact. This is given the name "artificial consciousness" to indicate that the theory is objective and while it applies to manufactured devices it also stimulates a discussion of the relevance of such a theory to the consciousness of living organisms."'' Igor Aleksander says that the theoretical framework of the theory ''"has been inspired by Kelly's theory of "personal constructs" which explains the causes of personality differences in human beings."''

He defines a perceptual mode ''"which is active during perception - when sensory neurons are active"'' and a mental mode ''"which is active even when sensory neurons are inactive"''. In his model the inputs of both modes are added, and the mental mode is modelled as a feedback loop from the neural network's output.

The neural network that he used as an example has an inner state, and it can be trained (e.g. by a reinforcement signal) to go from a certain state qw to a certain other state qx when the input is ix. This means that after training it also goes from qw to qx when the input is merely similar to ix (the main reason why neural networks are useful). In set notation such learning is described as qx = §( ix, qw ). If we then continue to give a reinforcement signal, but provide no input, then it goes into mental mode, where the input comes from the output; in this way we can teach it to stay in the state qx. As a result of such training, the only "learned" state will be qx, and it only goes to that state from the state qw. It likely cannot be in any other state when it is not in training mode. If, after such training, we put it into state qw, then it stays in the state qw and changes its output, and in time the output becomes similar to ix, so it goes to the state qx and stays there (Owen Holland also proposed a "Recurrent Neural Machine" which does this faster). This is a primitive example of prediction, that the state qw is followed by the state qx: a primitive learned model of the environment.
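The transition scheme described above can be sketched as a toy state machine. This is only an illustration of the idea under the text's naming (qw, qx, ix), not Aleksander's actual neural implementation: a lookup table stands in for the trained network, so it does not generalize to inputs merely similar to ix.

```python
# Toy sketch of the trained state-transition idea: input ix moves the
# system from state qw to state qx, and in "mental mode" the learned
# output is fed back as input, replaying the transition as a primitive
# prediction. Names follow the text; this is NOT a neural network.

class StatePredictor:
    def __init__(self, initial_state):
        self.state = initial_state
        self.transitions = {}  # (state, input) -> next state
        self.outputs = {}      # state -> output recalled in mental mode

    def train(self, state, inp, next_state):
        """Reinforce: from `state`, input `inp` leads to `next_state`;
        also learn to emit `inp` when sitting in `state`, so the
        transition can later be replayed without external input."""
        self.transitions[(state, inp)] = next_state
        self.outputs[state] = inp

    def perceive(self, inp):
        """Perceptual mode: an external input drives the transition."""
        self.state = self.transitions.get((self.state, inp), self.state)
        return self.state

    def imagine(self):
        """Mental mode: the learned output is fed back as input
        (the feedback loop), predicting the state that would follow."""
        recalled = self.outputs.get(self.state)
        if recalled is None:
            return self.state  # nothing learned here; the state persists
        return self.perceive(recalled)

predictor = StatePredictor("qw")
predictor.train("qw", "ix", "qx")
print(predictor.imagine())  # mental mode replays ix internally: prints "qx"
print(predictor.imagine())  # no learned output in qx, so it stays: "qx"
```

A real neural model would reach qx from any input close enough to ix; the table here only captures the single learned transition that the example discusses.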

''"Prediction is one of the key functions of consciousness. An organism that cannot predict would have a seriously hampered consciousness."''

He argues that awareness of self follows from prediction, because in his model prediction requires a feedback loop. But if prediction is done in accordance with Dennett's multiple drafts principle, then it also requires information from other processes, to find out whether a process fits into its environment, which sometimes may also involve feedback loops.

He says that spatial association is necessary for the representation of meaning. He also named language learning, will, instinct and emotion as aspects of AC. ''"Language is a result of the growth process of a societal repository from which it can be learned by a conscious organism, given the availability of knowledgeable 'instructors'"''.

As an answer to the Penrose argument he says that ''"the main aim of the theory is to show that the complex mixture of properties normally attributed to a conscious organism are the properties associated with some computing structures and may be described through appropriate formalisms"'' and ''"while it is possible to agree that consciousness cannot be captured by a programmer's recipe (algorithm), the door should at least be kept open for computational models of consciousness based on systems that are capable of building up their own processing structures"''. Concerning Dennett's multiple drafts principle he says that ''"While the Cartesian ghost in the machine has been expunged, the ghost of the programmer is still there, and this does little to explain how the machine components come into being and do what they do"'', implying that there is no mechanism yet to implement that principle other than in a pre-programmed way (my program provides a proposed mechanism and passed one, though very simple, test). ''"Nagel's suggestion that it is necessary to say what it is like to be a particular conscious organism, can, in ACT be expressed in terms of a taxonomy of state structures (i.e. how does the state structure of a bat differ from that of a human?)"''.

] 21:51, 18 May 2004 (UTC)


==NEUTRALITY!==

SUGGESTION IMPLEMENTED. The words ARTIFICIAL CONSCIOUSNESS were changed to STRONG AI in MS Word and the article pasted back. It reads fine as a description of Strong AI.

Suggestion:

1. Put a revised version of the previous version of 30/11/04 as the article under this heading, with clear links to a Strong AI article. It will also be possible to provide a clear discussion of simulation versus real AC, and how simulation means something different for a radical behaviourist from what it means for a dualist. Discussion of zombies etc. can be included to expand this AC article into a broad coverage of the field.

2. Create a Strong AI article by removing the more general discussion from the current AC article (perhaps writing it back to the AC article). The Artificial Intelligence article should also be amended to give clear links to the new Strong AI article.


Reasons for this suggestion:

To me this article reads like a naive realist article about strong AI. There is currently no article devoted to strong AI, and I strongly recommend that this entire text be shifted to a new heading. The current text says:

"This functionalist view, that the human being is truly a real machine, prompts us to ask what type of machine the brain is. That the brain is a machine of the Turing type is assumed because no more powerful computing paradigm has been discovered and all that is known about the brain (admittedly not very much), in the mainstream view, does nothing to contradict the supposition."

This shows that the article is clearly about strong AI and not artificial consciousness per se.

As an article on artificial consciousness it is far too partisan. It introduces none of the problems of the philosophy of consciousness and fails to properly consider the viewpoints of workers outside computer science.

It is unsuitable as an encyclopedia article on AC. It must be moved and adapted to Strong AI

I suggest that the reversion of 1/12/04 is replaced by the previous version and a new Strong AI heading is created for this article. I did not immediately revert it because this deserves some discussion.

Some points that show the partisan nature of the article:

1. It uses sentience as interchangeable with consciousness when the two terms are not interchangeable.

2. It uses a dictionary definition of consciousness: "Possessing knowledge, whether by internal, conscious experience or by external observation; cognizant; aware; sensible" when there is a perfectly good Misplaced Pages entry that considers the ramifications of the subject.

There is a need, as you suggest, in order to avoid the naive realist fantasy, to have some reality checkpoints. Surely a discussion of artificial consciousness should be grounded on a primary definition of consciousness itself and not on wikipedia's own 'derived' definition, no matter how good the latter might be. Matt Stan 10:30, 1 Dec 2004 (UTC)

The use of the limited dictionary definition allows a partisan view without directing readers to where the issues might be discussed.

3. It fails to provide an overview of the philosophy of consciousness.

The focus of previous discussions revolved more around the idea of the artifice in artificial consciousness, since, as you point out, consciousness per se is covered elsewhere. Another term, synthetic consciousness, was coined to get over the philosophical problem that artificial consciousness is a tautology: if observers are aware of the artifice then they will never deem artificial consciousness to be real, and if it's not real consciousness then it isn't consciousness at all! Conversely, if observers are unaware of the artifice and deem the entity to be real then it is no longer artificial, but real consciousness, regardless of how it was contrived. But also see Philosophical zombie and zimboe Matt Stan 10:30, 1 Dec 2004 (UTC)

This shows that I am not the first person to spot that this article is partisan. Artificial consciousness simply means an entity created by artifice that is conscious. What seems to be happening here is that some proponents of strong AI who believe that this would generate real consciousness are occupying a Misplaced Pages heading to ram their point home. A discussion of the wider philosophical issues would demonstrate that strong AI is only one of several approaches to AC.

Or it could mean consciousness that was perceived to be artificial. Otherwise why not use the term synthetic consciousness or simulated consciousness? An artificial sweetener is so called because it is ersatz sugar, and detectable as such. If the term artificial consciousness really only refers to the conscious aspects of entities with artificial intelligence, then this connection should be made clear. This is merely a semantic point, but it is intended to isolate the definition of consciousness and ask whether it can ever be said to be artificial in itself. An artificially conscious entity simply means an entity created by artifice that is conscious. That is not the same as artificial consciousness. Matt Stan 11:12, 1 Dec 2004 (UTC)

Artificial sugar would be real sugar made by artifice. Artificial sweeteners may not be sugar. Artificial consciousness is real consciousness made by artifice.

Your mention of philosophical zombies shows how important it is that Artificial consciousness should be kept as an overview of the field and not subverted by strong AI.

You are right! Matt Stan 11:15, 1 Dec 2004 (UTC)

4. It mentions behavioural psychology but gives no attention to cognitivism and indirect realism. It fails to distinguish adequately between the SIMULATION of consciousness and consciousness.

5. It fails to note that the simulation of consciousness is the same as consciousness to behaviourists and direct realists.

6. It gives nowhere near enough attention to the indirect realist physicalist yet indirect realist non-dualist arguments or why such arguments exist.

7. It does not mention the suggestion of many authors from Searle to Penrose that artificial consciousness may require physical phenomena that are not part of classical information processing.