This is an old revision of this page, as edited by Tkorrovi (talk | contribs) at 18:02, 21 March 2009 (→Schools of thought). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.
Objective less Genuine AC
- By "less Genuine" we mean not as real as "Genuine" but more real than "Not-genuine". It is an alternative view to "Genuine AC"; on this view, AC is less genuine only because of the requirement that the study of AC must be as objective as the scientific method demands, whereas according to Thomas Nagel consciousness includes subjective experience that cannot be objectively observed. It does not intend to restrict AC in any other way.
- An AC system that appears conscious must be theoretically capable of achieving all known objectively observable abilities of consciousness possessed by a capable human, even if it does not need to have all of them at any particular moment. Therefore AC is objective, always remains artificial, and is only as close to consciousness as we objectively understand the subject. Because of the demand to be capable of achieving all these abilities, computers that appear conscious are a form of AC that may be considered strong artificial intelligence, but this also depends on how strong AI is defined.
To start with, I'd say this needs to be much clearer about what the actual point/position is and who holds it, and needs the sub-issues disentangled (e.g., scientific observation of C vs. whether it's there, objective vs. artificial, AC vs. strong AI, etc.). And the first sentence of the second paragraph is either part of the "less Genuine" position, which then needs to be explained, sourced, and related to that position; or it's a general statement about AC, which is thus either original research or mistaken about many people's views. It implies that there couldn't be a dog-mentality AC, because it wouldn't be "capable of achieving all known objectively observable abilities of consciousness possessed by a capable human", and that there couldn't be aliens very different from humans (e.g., not capable of pain, etc.) but nonetheless conscious. One could hold this, but many people do not. There is much more to say, but that should be enough; thx again and hope that helps, "alyosha" (talk) 06:40, 3 January 2006 (UTC)
Suggest:
Artificial Consciousness need not be as genuine as Strong AI; it must be as objective as the scientific method demands and capable of achieving the known objectively observable abilities of consciousness, except subjective experience, which according to Thomas Nagel cannot be objectively observed.
The point is to differentiate AC from Strong AI, which by some approaches means just copying the content of the brain; no paper states that this is the aim of AC. There are no such terms as "genuine AC", "not genuine AC", etc.; these were invented by a person who was banned indefinitely from editing this article by decision of the Arbitration Committee. Overall, the text of the article is too long; I have always said it should be shortened so that the reader can follow it. Tkorrovi 01:54, 7 January 2006 (UTC)
If one really wants to improve this article, please note that the very first sentence was edited incorrectly: nowhere is it said that the aim of AC is to produce a definition of consciousness; the aim of AC is to implement known and objective abilities or aspects of consciousness. The definition by Igor Aleksander said "defining that which would have to be synthesized were consciousness to be found in an engineered artefact"; hard as it may be to follow, this says something very different from producing a "rigorous definition of consciousness". Tkorrovi 02:48, 7 January 2006 (UTC)
Suggest:
This article interchangeably uses the words intelligence, consciousness, and sentience. In fact, an artificially conscious program would more correctly be described as an artificial >sapience<, as sapience implies complex reasoning on the level of a human. Sentience, by contrast, merely reflects the ability to feel. Almost all multicellular animals have the ability to react to their environment through a nervous system, and so can to some degree be considered 'feeling' and therefore sentient. It is quite possible that a program might be programmed in such a way as to not incorporate feeling at all. Additionally, the difference between intelligence and consciousness is quite great. An intelligent being might be capable of performing a complex task, reacting to its environment accordingly, without actually using any reasoning, and it is reasoning that defines sapience.
(gigacannon, 01:26 GMT 15 May 06)
We cannot use our own terms here; despite all the criticism about "original research" in this article, the article is about what was written in various papers. The term "consciousness" has mostly been used regarding humans, so it is about the kind of awareness which humans have, or anything else which may have the same kind of awareness. Even bacteria have the ability to react to their environment; the difference is how advanced such awareness is, for example the awareness of humans is so advanced that the brain can model every kind of external process. But then, all these thoughts are for us to understand the subject; what we can write in the article is how exactly these things were explained in various papers. Tkorrovi 15:03, 24 May 2006 (UTC)
Personality, personal identity, and behaviourism
"Personality is another characteristic that is generally considered vital for a machine to appear conscious. In the area of behaviorial psychology, there is a somewhat popular theory that personality is an illusion created by the brain in order to interact with other people. It is argued that without other people to interact with, humans (and possibly other animals) would have no need of personalities, and human personality would never have evolved. An artificially conscious machine may need to have a personality capable of expression such that human observers can interact with it in a meaningful way. However, this is often questioned by computer scientists; the Turing test, which measures a machine's personality, is not considered generally useful any more".
Does this make sense to anybody? Aspects of personality, such as extroversion/introversion are well-defined, objectively measurable and fairly fixed. They are even describable as behavioural dispositions. Since when was something an illusion because it need not necessarily have evolved? Is the human appendix an illusion? It is hard to see how even a behaviourist could dismiss them as illusory. Does the article really mean "personal identity"? (cf Dennett's "center of narrative gravity").1Z 17:41, 22 January 2007 (UTC)
Illusion is not the correct word, although the following statements certainly make sense. What about something like 'a construct of the brain' ? Stringanomaly (talk) 19:55, 2 April 2008 (UTC)
- "Illusion" is certainly not the correct word. It seems that someone wanted to spread the "consciousness is an illusion" idea everywhere, which is really a hidden homunculus argument. So some sentences about these arguments seem to have been inserted in improper places. It seems that this person was also an idealist, considering the amount of text about idealist philosophy inserted in the article, which might be a reason why this person didn't like Artificial Consciousness, which aims to explain things rationally. This paragraph has not been in the article for a long time now. Tkorrovi (talk) 22:11, 3 April 2008 (UTC)
Here's another one
"The phrase digital sentience is considered a misnomer by some, since sentience means the ability to feel or perceive in the absence of thoughts, especially inner speech. It suggests conscious experience is a state rather than a process".
What's going on here? Can't digital systems do processing? 1Z 18:50, 22 January 2007 (UTC)
This is almost a great article
I've marked this article for cleanup and started in on it.
I think that there is some very solid information in this article, but it's very uneven and confusing at the moment. The strongest section is the description of the work of Franklin, Sun and Haikonen. It seems to me that their understanding of their own field should carry the day. Specifically, these are my criticisms:
- A few paragraphs sound like original research. These should be cut.
- A few paragraphs sound like someone with only a passing familiarity with AI or consciousness simply mulling the subject over. These should be cut.
- Several subjects are discussed two or three places, such as Searle, self-awareness, etc. The article should be reorganized so that each is mentioned only once.
- A few sections are about closely related, but different, subjects. These need to be tied to the main subject and should probably be substantially shortened. The other subjects are (at least):
- artificial general intelligence, or what futurists call Strong AI.
- philosophy of artificial intelligence, (Turing Test, Searle etc)
- Philosophy of consciousness and the hard problem of consciousness
- artificial intelligence in fiction
- The intro needs to put the subject into context -- specifically, it must distinguish artificial consciousness from artificial intelligence and from artificial general intelligence, and it must cite sources that make this distinction. It should also specify its relationship to the philosophy of artificial intelligence.
- The whole article doesn't flow, mostly because the titles of some sections don't capture their content, the lead paragraphs of some sections don't provide any context, and some paragraphs don't seem to be on the same topic as the rest of their sections.
To identify the original research (and other bull----), we need to reference each paragraph. Since Harvard references seem to be more common in the stronger sections of the article, we should use Harvard referencing throughout. Once the references are in place, we can begin reorganizing and cutting.
Tonight I'm cleaning up most of the references with {{Harv|...}} and {{Citation|...}} templates and marking a few paragraphs as original research. ---- CharlesGillingham 11:28, 26 September 2007 (UTC)
- I agree with all you have said, but I can't help much, as I don't understand much.
- At the beginning there should be a definition or a paragraph that makes the subject clear for non-specialist users coming to this page.
- The "THAT WHICH WOULD HAVE TO BE SYNTHESIZED WERE CONSCIOUSNESS TO BE FOUND IN AN ENGINEERED ARTIFACT" is not exactly such a thing.
- The phrase "The brain somehow avoids the problem described in the Homunculus fallacy"
- I don't think it should be here, as I am pretty sure that AI researchers don't see the Homunculus fallacy as a problem.
- Computers do video recognition, such as human face recognition.
- Using this recognition, computers perform actions, such as warning that there is a criminal.
- This is how the brain works: it does video recognition and acts accordingly to serve human needs.
- The Homunculus argument is like that old myth about the universe sitting on the back of a turtle :).
- So I think the Homunculus argument is not a scientific thing to link Artificial Consciousness to. Raffethefirst (talk) 12:59, 14 March 2008 (UTC)
- The definition "THAT WHICH WOULD HAVE TO BE SYNTHESIZED WERE CONSCIOUSNESS TO BE FOUND IN AN ENGINEERED ARTIFACT" was taken from a peer-reviewed paper by Igor Aleksander, which was one of the earlier papers about Artificial Consciousness. So there we have no choice; this is how the field was defined when it was founded. And I would say that this definition is good enough. Of course it would be clearer for the average visitor to explain it a little more, but then there would be the problem that these explanations would not be taken from a peer-reviewed paper, and would again be considered original research here. So there is likely no better solution than to have only this definition there, as it is adequate, only perhaps somewhat difficult for average visitors to understand.
- I agree concerning the homunculus argument; I also don't think the homunculus argument is serious science. But some people think otherwise, so their view should also be represented somehow. Maybe the solution would be to add a link to the homunculus fallacy article to "See also"; at least I think the argument is not so important that it should be mentioned at the beginning of the article. I think we should wait here for the opinion of the others, and after some reasonable time make some edits concerning the homunculus argument. Tkorrovi (talk) 21:23, 16 March 2008 (UTC)
- As I said above, I think this article should (1) reflect the views of the researchers who actually believe artificial consciousness is important, and (2) have references to their publications in every paragraph. In line with this, I would leave Aleksander's definition (but perhaps try to make it clearer) and cut the second paragraph, because it has no source. The same sort of standard has to be applied to the rest of the article. The biggest problem with this article is that it contains too much original research. ---- CharlesGillingham (talk) 17:20, 17 March 2008 (UTC)
- OK, at least there is now a consensus to remove this sentence:
- "The brain somehow avoids the problem described in the Homunculus fallacy (such as by being the homunculus) and overcomes the problems described below in the next section."
- So I removed it, and added a link to the homunculus fallacy to "See also". The sentence was wrongly placed between two other sentences; now the paragraph reads as it was likely originally written, and makes much more sense. Also, the sentence seems to be wrong, as the homunculus fallacy is about theories which try to explain how the brain works, not about the brain itself. If you or Raffethefirst disagree, feel free to restore the sentence. Also, there is a lot written about dualism in the article; I don't think dualism is so relevant to science, whatever someone's personal view may be. Also, Dennett's multiple drafts principle was meant to be a solution to Ryle's regress, but this is not mentioned in the article. Tkorrovi (talk) 06:01, 22 March 2008 (UTC)
Two Anticipation paragraphs
In the section Consciousness in digital computers, there are two separated paragraphs on the same topic, anticipation. I suggest someone go and delete or merge the paragraphs. 220.233.7.38 11:17, 14 October 2007 (UTC)
- This article is marked for cleanup. I am hoping that some of the original authors will return to help pull this article back together. ---- CharlesGillingham 18:17, 15 October 2007 (UTC)
Removed external links spam: Absolutely Dynamic/Conscious System
I removed the link to "Proposed mechanisms for AC implemented by computer program: Absolutely Dynamic Systems", as it is an unknown work by a person without any affiliations, without a proper literature review, and without experimental validation. Moreover, this link has been spammed by its author in several internet forums as well as newsgroups. The bottom line is that this work is unknown and poorly written, and has been ignored by the scientific community (nobody has ever cited it). Thus it should not be in Wikipedia.
- Link restored. This is a link to an open source software project relevant to the article; links to open source software projects should not be removed. Tkorrovi (talk) 06:07, 5 February 2008 (UTC)
- Accusations of spamming have no grounds; whoever claims so should provide an example. Tkorrovi (talk) 06:47, 5 February 2008 (UTC)
Planning on gutting this article
I marked several sections as unsourced a few years ago. I'm going to toss them soon unless someone wants to save them. ---- CharlesGillingham (talk) 09:14, 6 January 2009 (UTC)
will agents ever say "Please let me out of here!" ;)
A COnscious MAchine Simulator – ACOMAS Salvatore Gerard Micheal, 06/JAN/09
Objects
- 2 cross-verifying senses (simulated): hearing (stereo), seeing (stereo)
- short-term symbol register (8^3 symbols arranged in an 8x8x8 array)
- rule-base (self-verifying and extending)
- 3D visualization register
- models of reality (at least 2)
- morality (unmodifiable and uncircumventable): don't steal, kill, lie, or harm
- goal list (modifiable, prioritized)
- output devices: robotic arms (simulated – 2), voice synthesizer and speaker, video display unit
- local environment (simulated)
- operator (simulated, controlled by operator)
Purpose
- test feasibility of the construct for an actual conscious machine
- discover timing requirements for a working prototype
- discover specifications of objects
- discover implications/consequences of enhanced intelligence (humans have 7 short-term symbol registers)
- discover implications/consequences of emotionless consciousness
Specifications – The registers are the core of the device – the (qualified) ‘controllers’ of the system (acting on goal list), the reasoners of the system (identifying rules), but all constrained by morality. The goal list should be instantaneously modifiable. For instance, an operator can request “show me your goal list” .. “delete that item” or “move that item to the top” .. “learn the rules of chess” and the device should comply immediately. Otherwise, the device plays with its environment – learning new rules and proposing new experiments.
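The goal-list behaviour described above (instantaneously modifiable and prioritized, with operator commands like "show me your goal list", "delete that item", and "move that item to the top") could be sketched in a few lines. This is only an illustrative sketch; the class and method names below are hypothetical and not taken from any actual ACOMAS implementation:

```python
# Hypothetical sketch of ACOMAS's modifiable, prioritized goal list.
class GoalList:
    def __init__(self):
        self.goals = []  # index 0 is the top (highest-priority) goal

    def show(self):
        """Operator command: 'show me your goal list'."""
        return list(self.goals)

    def add(self, goal):
        """Append a new goal at the lowest priority."""
        self.goals.append(goal)

    def delete(self, goal):
        """Operator command: 'delete that item'."""
        self.goals.remove(goal)

    def move_to_top(self, goal):
        """Operator command: 'move that item to the top'."""
        self.goals.remove(goal)
        self.goals.insert(0, goal)
```

Each command maps directly onto a list operation, so the device can comply "immediately" in the sense the essay demands.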
The purpose of the cross-verifying senses is to reinforce the 'sense of identity' established by these senses, the registers, and the model of reality. The reason for 'at least 2' models is to provide a 'means-ends' basis for problem solving: one model to represent the local environment 'as is' and another for the desired outcome of the top goal. The purpose of arranging the short-term register in a 3D array is to give the capacity for 'novel thought' processes (humans have a tendency to think in linear, sequential terms). The reason for designing a self-verifying and extending rule-base is that this tends to be a data- and processing-intensive activity; if we designed the primary task of the device to be a 'rule-base analyzer', undoubtedly the device would spend the bulk of its time on related tasks (thereby creating a rule-base analyzer device and not a conscious machine). The 'models of reality' could be as simple as a list of objects and locations, or they could be a virtual reality implemented on a dedicated machine. This applies to the 'local environment' as well. For operator convenience, the simulated local environment should be in the form of a virtual reality, so the operator would interact with the device in a virtual world (in the simulated version). In this version, the senses, robot arms, and operator presence would all be virtual. This should be clarified to the device so that any transition to 'real reality' would not have destructive consequences.
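The 8x8x8 short-term symbol register (8^3 = 512 slots) with its 3D adjacency, intended to support non-linear association, could likewise be sketched. Again, all names here are hypothetical illustrations, not part of any published ACOMAS code:

```python
# Hypothetical sketch of the 8x8x8 short-term symbol register.
class SymbolRegister:
    SIZE = 8

    def __init__(self):
        # A 3D grid of cells, each holding one symbol (or None when empty).
        s = self.SIZE
        self.cells = [[[None] * s for _ in range(s)] for _ in range(s)]

    def store(self, x, y, z, symbol):
        self.cells[x][y][z] = symbol

    def recall(self, x, y, z):
        return self.cells[x][y][z]

    def neighbours(self, x, y, z):
        """Symbols in the six face-adjacent cells: the 3D arrangement
        gives every symbol spatial neighbours, unlike a linear register."""
        out = []
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nx, ny, nz = x + dx, y + dy, z + dz
            if 0 <= nx < self.SIZE and 0 <= ny < self.SIZE and 0 <= nz < self.SIZE:
                out.append(self.cells[nx][ny][nz])
        return out
```

The `neighbours` method is one simple way to realize the claimed advantage of the 3D layout: each stored symbol can be associated with up to six spatial neighbours rather than just a predecessor and successor.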
My ultimate purpose of creating a conscious machine is not out of ego or self aggrandizement. I simply want to see if it can be done. If it can be done, then create something creative with potential. My mother argues a robot can never ‘procreate’ because they are not ‘flesh and blood’. It can never have insight or other elusive human qualities. I argue that they can ‘procreate’ in their own way and are only limited by their creators. If we can ‘distill’ the essence of consciousness in a construct (like above), if we can implement it on a set of computer hardware and software, if we give that construct the capacity for growth, if that construct has even a minimal creative ability (such as with GA/GP), and critically limit its behavior by morality (such as above), we have created a sentient being (not just an artificial/synthetic consciousness). I focus on morality because if such a device became widespread, undoubtedly they would be abused to perform ‘unsavory’ tasks which would have fatal legal consequences for inventor and producer alike.
In this context, I propose we establish ‘robot rights’ before they are developed in order to provide a framework for dealing with abuses and ‘violations of law’. Now, all this may seem like science fiction to most. But I contend we have focused far too long on what we call ‘AI’ and expert systems. For too long we have blocked real progress in machine intelligence by one of two things: mystifying ‘the human animal’ (by basically saying it can’t be done) – or – staring at an inappropriate paradigm. It’s good to understand linguistics and vision – without that understanding, perhaps we could not implement certain portions of the construct above. But unless we focus on the mechanisms of consciousness, we will never model it, simulate it, or create it artificially.
The essay above can be found on scribd.com by searching on my name: sam micheal. Δ (talk) 17:22, 7 January 2009 (UTC)
- Why do you write this here? First of all, it shows your poor understanding of Artificial Consciousness; you should at least clearly specify what aspects of consciousness your system is supposed to model, and then find out whether it really does that. May I suggest that you write about your systems in some more proper place on the Internet and not here; if you are ever able to create a working system, you may create a project on SourceForge. Tkorrovi (talk) 13:41, 19 January 2009 (UTC)
Desperately seeking sources
The problem I have with this article, except for the "research approaches" section, is that there are almost no sources. We need to prove to the reader that they can trust this article. They're not going to trust us if we don't supply the sources for our information. We need mainstream articles (journal articles, major newspaper articles, major magazines, etc.) that prove that what we say is the normal, usual discussion of the subject. We need sources that say "artificial consciousness researcher Ron Sun argues that X". These unreferenced sections sound like someone just mulling things over. The reader needs to know who it is who thinks these things. ---- CharlesGillingham (talk) 14:34, 4 March 2009 (UTC)
We shouldn't be discussing what consciousness is on the page; we should be explaining what Bernard Baars thinks consciousness is. ---- CharlesGillingham (talk) 14:37, 4 March 2009 (UTC)
The problem with artificial consciousness is that it is not a very well-formed field yet; thus it is already quite difficult to have an article which gives readers at least some idea of the field. I understand the desire to improve, but the result here has often been utter nonsense added to the article. I'm glad that we finally got rid of it, and the article is now quite satisfactory. Tkorrovi (talk) 01:07, 5 March 2009 (UTC)
- Now we're getting somewhere! Your recent edits are exactly what this article needs. ---- CharlesGillingham (talk) 18:17, 5 March 2009 (UTC)
Strange that I failed to find a paper titled "Using statistical methods for testing AI" ;) Tkorrovi (talk) 09:50, 6 March 2009 (UTC)
Schools of thought
According to Persian Philosophers of 10th CEN The Conscience, The awareness is not an attribute nor it is out of our needs to comprehend- they are there like sky wind atoms rays of light all that is seen or not seen. These quantities are real and one can experience them independent of any other observer (as claimed by some quantum physicists). Even Avesina expressed it as a "hanging man in space" which knows at all times that he is there.
It is not easy to describe the source of all descriptions and the totality of it rather we should suffice us with skills-functions-calculations etc. and find a better term instead of AI.
Whoever added this, please reword it, explain it more clearly, and add sources. I understand that this is an attempt to say something important, but because it is worded poorly, it remains unclear to most readers. Tkorrovi (talk) 17:59, 21 March 2009 (UTC)