
Talk:Artificial intelligence: Difference between revisions

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.
Revision as of 17:01, 9 October 2014 by Steelpillow (talk | contribs): RFC: comment. Revision as of 17:03, 9 October 2014 by Codename Lisa (talk | contribs): Survey on retention of "Human-like": Subversion.
*:I don't know much about this subject area, but this compromise formulation is appealing to me. I can't comment on whether it has the advantage of being correct, but it does have the advantage of mentioning an aspect that might be especially interesting to novice readers. ] (]) 04:43, 8 October 2014 (UTC)


*<s>'''Support'''. The RFC question is inherently faulty: there cannot be a valid consensus concerning exclusion of a word from one arbitrarily numbered paragraph. One can easily add another paragraph to the article, use the same word in another paragraph in a manner that circumvents said consensus, or use the same word in conjunction with a negation. For instance, {{u|Robert McClenon}} seems not to endorse saying "AI is all about creating artificial human-like behavior." But doesn't that mean RM is in favor of saying "AI is '''not''' all about creating human-like behavior"? Both sentences have "human-like" in them. The RFC question must instead introduce specific wording and ask whether it is acceptable or not. Best regards, ] (]) 11:39, 3 October 2014 (UTC)</s> {{Color|red|Struck my comment because someone has refactored the question, effectively subverting my answer. This is not the question to which I said "Support". This RFC looks weaker and weaker every minute.}} ] (]) 17:03, 9 October 2014 (UTC)
::His intent is clear from the mountain of discussion of the issue above. The question is whether AI should be ''defined'' as simulating human intelligence, or intelligence in general. ---- ] (]) 13:54, 4 October 2014 (UTC)
:::Yes, that's where the danger lies: to form a precedent which is not the intention of the mountain of discussions that came beforehand. Oh, and let me be frank: even if no one disregarded that, I wouldn't help form a consensus on what is inherently a loophole that will come back to haunt me ... in good faith! ("In good faith" is the part that hurts most.) Best regards, ] (]) 19:31, 4 October 2014 (UTC)


This is the talk page for discussing improvements to the Artificial intelligence article.
This is not a forum for general discussion of the article's subject.
Article policies
Find sources: Google (books · news · scholar · free images · WP refs) · FENS · JSTOR · TWL


Article milestones
Date | Process | Result
August 6, 2009 | Peer review | Reviewed
This article has not yet been rated on Wikipedia's content assessment scale.
It is of interest to the following WikiProjects:
Please add the quality rating to the {{WikiProject banner shell}} template instead of this project banner. See WP:PIQA for details.
WikiProject Robotics (Top-importance)
This article is within the scope of WikiProject Robotics, a collaborative effort to improve the coverage of Robotics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as Top-importance on the project's importance scale.
WikiProject Technology
This article is within the scope of WikiProject Technology, a collaborative effort to improve the coverage of technology on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
WikiProject Philosophy: Ethics / Science / Mind / Language (High-importance)
This article is within the scope of WikiProject Philosophy, a collaborative effort to improve the coverage of content related to philosophy on Wikipedia. If you would like to support the project, please visit the project page, where you can get more details on how you can help, and where you can join the general discussion about philosophy content on Wikipedia.
This article has been rated as High-importance on the project's importance scale.
Associated task forces: Ethics, Philosophy of science, Philosophy of mind, Philosophy of language.
WikiProject Linguistics: Philosophy of language (High-importance)
This article is within the scope of WikiProject Linguistics, a collaborative effort to improve the coverage of linguistics on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as High-importance on the project's importance scale.
This article is supported by the Philosophy of language task force.
WikiProject Computing (High-importance)
This article is within the scope of WikiProject Computing, a collaborative effort to improve the coverage of computers, computing, and information technology on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as High-importance on the project's importance scale.
WikiProject Cognitive science (inactive)
This article is within the scope of WikiProject Cognitive science, a project which is currently considered to be inactive.
WikiProject Software: Computing (High-importance)
This article is within the scope of WikiProject Software, a collaborative effort to improve the coverage of software on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as High-importance on the project's importance scale.
This article is supported by WikiProject Computing (assessed as High-importance).
WikiProject Computer science (Top-importance)
This article is within the scope of WikiProject Computer science, a collaborative effort to improve the coverage of Computer science related articles on Wikipedia. If you would like to participate, please visit the project page, where you can join the discussion and see a list of open tasks.
This article has been rated as Top-importance on the project's importance scale.
WikiProject Systems (High-importance)
This article is within the scope of WikiProject Systems, which collaborates on articles related to systems and systems science.
This article has been rated as High-importance on the project's importance scale.
This article is within the field of Cybernetics.
WikiProject Religion (Top-importance)
This article is within the scope of WikiProject Religion, a project to improve Wikipedia's articles on Religion-related subjects. Please participate by editing the article, and help us assess and improve articles to good and 1.0 standards, or visit the wikiproject page for more details.
This article has been rated as Top-importance on the project's importance scale.
WikiProject Human–Computer Interaction (inactive)
This article is within the scope of WikiProject Human–Computer Interaction, a project which is currently considered to be inactive.


Archives
Archive 1 · Archive 2 · Archive 3 · Archive 4 · Archive 5 · Archive 6 · Archive 7 · Archive 8 · Archive 9 · Archive 10 · Archive 11 · Archive 12 · Archive 13 · Archive 14


This page has archives. Sections older than 100 days may be automatically archived by Lowercase sigmabot III.

Ongoing issues

Length

I argue that this is a WP:Summary article of a large field, and that therefore it is okay that it runs a little long. Currently, the article text is at around ten pages, but the article is not 100% complete and needs more illustrations. ---- CharlesGillingham (talk) 18:29, 2 November 2010 (UTC)

Todo: Illustration

The article needs a lead illustration and could use more illustrations throughout. ---- CharlesGillingham (talk) 18:29, 2 November 2010 (UTC)

Thanks to User:pgr94, the article is 70% illustrated. Almost there. ---- CharlesGillingham (talk) 00:03, 16 June 2011 (UTC)
The main illustration doesn't provide an actual example of an artificial intelligence, just a robot capable of mimicking human actions in a certain area (namely, sport). — Preceding unsigned comment added by 86.163.226.52 (talk) 15:37, 4 August 2011 (UTC)

Todo: Applications

The "applications" section does not give a comprehensive overview of the subject. ---- CharlesGillingham (talk) 18:29, 2 November 2010 (UTC)

Todo: Topics covered by major textbooks, but not this article

I can't decide if these are worth describing (in just a couple of sentences) or not. ---- CharlesGillingham (talk) 18:29, 2 November 2010 (UTC)

  1. Could use a tiny section on symbolic learning methods, such as explanation based learning, relevance based learning, inductive logic programming, case based reasoning.
  2. Could use a tiny section on knowledge representation tools, like semantic nets, frames, scripts etc.
  3. Control theory could use a little filling out with other tools used for robotics.
  4. Should mention Constraint satisfaction. (Under search). Discussion below, at Talk:Artificial intelligence/Archive 4#Constraint programming.
  5. Should mention the Frame problem in a footnote at least. ---- CharlesGillingham (talk) 19:52, 3 February 2011 (UTC)
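Item 4's constraint satisfaction could be illustrated in one short example if a section is added. A minimal sketch of backtracking search over the standard textbook map-colouring CSP (the regions, colours, and all names below are illustrative, not taken from the article):

```python
# Minimal backtracking search for a constraint satisfaction problem (CSP):
# colour regions of a map so that no two neighbouring regions share a colour.
# (Toy example using the classic Australia map; 3 colours suffice.)

NEIGHBOURS = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"],
    "SA": ["WA", "NT", "Q", "NSW", "V"], "Q": ["NT", "SA", "NSW"],
    "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
}
COLOURS = ["red", "green", "blue"]

def consistent(region, colour, assignment):
    """A colour is allowed if no already-coloured neighbour uses it."""
    return all(assignment.get(n) != colour for n in NEIGHBOURS[region])

def backtrack(assignment=None):
    assignment = {} if assignment is None else assignment
    if len(assignment) == len(NEIGHBOURS):      # every region coloured: done
        return assignment
    region = next(r for r in NEIGHBOURS if r not in assignment)
    for colour in COLOURS:
        if consistent(region, colour, assignment):
            assignment[region] = colour
            result = backtrack(assignment)
            if result is not None:
                return result
            del assignment[region]              # undo and try the next colour
    return None                                 # dead end: backtrack

solution = backtrack()
```

Note how the search itself never references anything human; it just assigns, tests, and undoes.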

Todo: redlinks and tags

  1. Where can we link Belief calculus? Does this include Dempster–Shafer theory (according to R&N)? I think that's more or less deprecated. Does R&N include the expectation–maximization algorithm as a kind of belief calculus? I don't think so. Where is this in Wikipedia?
  2. There are still several topics with no source: Subjective logic, Game AI, etc. All are tagged in the article. ---- CharlesGillingham (talk) 19:59, 3 February 2011 (UTC)
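For reference while deciding where belief calculus should link: Dempster's rule of combination, the core of Dempster–Shafer theory, is compact enough to sketch. The mass values and hypothesis names below are invented purely for illustration:

```python
from itertools import product

# Dempster's rule of combination: merge two Dempster-Shafer mass functions
# (dicts mapping frozenset-of-hypotheses -> mass), renormalising away the
# mass that falls on the empty set (the "conflict").

def combine(m1, m2):
    unnormalised, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            unnormalised[inter] = unnormalised.get(inter, 0.0) + x * y
        else:
            conflict += x * y          # mass assigned to the empty set
    k = 1.0 - conflict                 # normalisation constant
    return {s: v / k for s, v in unnormalised.items()}

# Two sources of evidence weighing in on a Cold-vs-Flu diagnosis (toy numbers):
m1 = {frozenset({"Cold"}): 0.6, frozenset({"Cold", "Flu"}): 0.4}
m2 = {frozenset({"Flu"}): 0.5, frozenset({"Cold", "Flu"}): 0.5}
m12 = combine(m1, m2)
```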

Goals

I think a high-level listing of AI's goals (from which more specific Problems inherit) is needed; for instance "AI attempts to achieve one or more of: 1) mimicking living structure and/or internal processes, 2) replacing a living thing's external function, using a different internal implementation, 3) ..." At one point in the past, I had 3 or 4 such disjoint goals stated to me by someone expert in AI. I am not, however. DouglasHeld (talk) 00:11, 26 April 2011 (UTC)

We'd need a reliable source for this, such as a major AI textbook. ---- CharlesGillingham (talk) 16:22, 26 April 2011 (UTC)

"Human-like" intelligence

I object to the phrase "human-like intelligence" being substituted here and elsewhere for "intelligence". This is too narrow and is out of step with the way many leaders of AI describe their own work. This only describes the work of a small minority of AI researchers.

  • AI founder John McCarthy (computer scientist) argued forcefully and repeatedly that AI research should not attempt to create "human-like intelligence", but instead should focus on creating programs that solve the same problems that humans solve by thinking. The programs don't need to be human-like at all, just so long as they work. He felt AI should be guided by logic and formalism, rather than psychological experiments and neurology.
  • Rodney Brooks (leader of MIT's AI laboratories for many years) argued forcefully and repeatedly that AI research (specifically robotics) should not attempt to simulate human-like abilities such as reasoning and deduction, but instead should focus on animal-like abilities such as survival and locomotion.
  • Stuart Russell and Peter Norvig (authors of the leading AI textbook) dismiss the Turing Test as irrelevant, because they don't see the point in trying to create human-like intelligence. What we need is the intelligence it takes to solve problems, regardless of whether it's human-like or not. They write that airplanes are tested by how well they fly, not by how well they can fool other pigeons into thinking they are pigeons.
  • They also object to John Searle's Chinese room argument, which claims that machine intelligence can never be truly "human-like", but at best can only be a simulation of "human-like" intelligence. Their response is that as long as the program works, they don't care whether you call it a simulation or not. I.e., they don't care if it's human-like.
  • Russell and Norvig define the field in terms of "rational agents" and write specifically that the field studies all kinds of rational or intelligent agents, not just humans.
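The "rational agent" framing in the last bullet is easy to state concretely: an agent is nothing but a mapping from percepts to actions, judged by a performance measure, with nothing human-like required. A minimal sketch using the textbook's two-square vacuum world (the names and the scoring rule here are illustrative):

```python
# A simple reflex agent in the two-square "vacuum world". The agent is just
# a percept -> action mapping scored by a performance measure; it models
# nothing about human psychology or neurology.

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"

def run(agent, world, location="A", steps=8):
    """Performance measure: total clean squares observed over the run."""
    score = 0
    for _ in range(steps):
        action = agent((location, world[location]))
        if action == "Suck":
            world[location] = "Clean"
        elif action == "Right":
            location = "B"
        elif action == "Left":
            location = "A"
        score += sum(1 for s in world.values() if s == "Clean")
    return score

world = {"A": "Dirty", "B": "Dirty"}
score = run(reflex_vacuum_agent, world)
```

Swapping in a different agent function changes the score but nothing else; "rationality" is judged entirely by the performance measure.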

AI research is primarily concerned with solving real-world problems, problems that require intelligence when they are solved by people. AI research, for the most part, does not seek to simulate "human like" intelligence, unless it helps to solve this fundamental goal. Although some AI researchers have studied human psychology or human neurology in their search for better algorithms, this is the exception rather than the rule.

I find it difficult to understand why we want to emphasize "human-like" intelligence. As opposed to what? "Animal-like" intelligence? "Machine-like" intelligence? "God-like" intelligence? I'm not really sure what this editor is getting at.

I will continue to revert the insertion "human-like" wherever I see it. ---- CharlesGillingham (talk) 06:18, 11 June 2014 (UTC)

Completely agree. The above arguments are good. Human-like intelligence is a proper subset of intelligence. The editor seems to be confusing "Artificial human intelligence" and the much broader field of "artificial intelligence". pgr94 (talk) 10:12, 11 June 2014 (UTC)

One more thing: the phrase "human-like" is an awkward neologism. Even if the text was written correctly, it would still read poorly. ---- CharlesGillingham (talk) 06:18, 11 June 2014 (UTC)

To both editors, WP:MOS requires that the Lead section only contain material which is covered in the main body of the article. At present, the five items which you outline above are not contained in the main body of the article but only on Talk. The current version of the Lead section accurately summarizes the main body of the article in its current state. FelixRosch (talk) 14:54, 23 July 2014 (UTC)
Neither the article nor any of the sources defines AI by using the term "human-like" to specify the exact kind of intelligence that it studies. Thus the addition of the term "human-like" absolutely does not summarize the article. I think the argument from WP:SUMMARY is actually a very strong argument for striking the term "human-like".
I still don't understand the distinction between "human-like" intelligence and the other kind of intelligence (whatever it is), and how this applies to AI research. Your edit amounts to the claim that AI studies "human-like" intelligence and NOT some other kind of intelligence. It is utterly not clear what this other kind of intelligence is, and it certainly does not appear in the article or the sources, as far as I can tell. It would help if you explain what it is you are talking about, because it makes no sense to me and I have been working on, reading and studying AI for something like 34 years now. ---- CharlesGillingham (talk) 18:23, 1 August 2014 (UTC)
Also, see the intro to the section Approaches and read footnote 93. This describes specifically how some AI researchers are opposed to the idea of studying "human-like" intelligence. Thus the addition of "human-like" to the intro not only fails to summarize the article, it actually claims the opposite of what the body of the article states, with highly reliable sources. ---- CharlesGillingham (talk) 18:34, 1 August 2014 (UTC)
That's not quite what you said at the beginning of this section. Also, your two comments on 1 August seem to be at odds with each other: either you are saying that there is nothing other than human-like intelligence, or you wish to introduce material to support the opposite. If you wish to develop the material in the body of the article following your five points at the start of this section, then you are welcome to try to post them in the text prior to making changes in the Lead section. WP policy is that material in the Lede must first be developed in the main body of the article, which you have not done. FelixRosch (talk) 16:35, 4 September 2014 (UTC)
As I've already said, the point I am making is already in the article.
"Human-like" intelligence is not in the article. Quite the contrary.
The article states that this is long standing question that AI research has not yet answered: "Should artificial intelligence simulate natural intelligence by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?"
And the accompanying footnote makes the point in more detail:
"Biological intelligence vs. intelligence in general:
  • Russell & Norvig 2003, pp. 2–3, who make the analogy with aeronautical engineering.
  • McCorduck 2004, pp. 100–101, who writes that there are "two major branches of artificial intelligence: one aimed at producing intelligent behavior regardless of how it was accomplished, and the other aimed at modeling intelligent processes found in nature, particularly human ones."
  • Kolata 1982, a paper in Science, which describes McCarthy's indifference to biological models. Kolata quotes McCarthy as writing: "This is AI, so we don't care if it's psychologically real". McCarthy recently reiterated his position at the AI@50 conference where he said "Artificial intelligence is not, by definition, simulation of human intelligence" (Maker 2006)."
This proves that the article does not state that AI studies "human like" intelligence. It states, very specifically, that AI doesn't know whether to study human-like intelligence or not. ---- CharlesGillingham (talk) 03:21, 11 September 2014 (UTC)

Human-like intelligence is the subject of each of the opening eight sections including "Natural language"

As the outline of this article plainly shows, each one of its opening eight sections is explicitly about "human-like" intelligence. This fact should be reflected in the Lede as well. In the last few weeks you have taken several differing positions: first you were saying that there is nothing other than human-like intelligence, then you wished to introduce multiple references to support the opposite, and now you appear to wish to defend an explicitly strong-AI version of your views against "human-like" intelligence. You are expected on the basis of good faith to make your best arguments up front. The opening eight sections are all devoted to human-like intelligence, even to the explicit numbering of natural language communication into the list. There is no difficulty if you wish to write your own new page for "Strong-AI" and only Strong-AI. If you like, you can even ignore the normative AI perspective on your version of a page titled "Strong-AI". That, however, is not the position represented on the general AI page, which in its first eight sections is predominantly oriented to human-like intelligence. FelixRosch (talk) 16:18, 11 September 2014 (UTC)

(Just to be clear: (1) I did not say there is nothing other than human-like intelligence. I don't know where you're getting that. (2) I find it difficult to see how you could construe my arguments as being in favor of research into "strong AI" (as in artificial general intelligence) or as an argument that machines that behave intelligently must also have consciousness (as in the strong AI hypothesis). As I said in my first post, AI research is about solving problems that require intelligence when solved by people. And more to the point: the solutions to these problems are not, in general, "human-like". This is the position I have consistently defended. (3) I have never shared my own views in this discussion, only the views expressed by AI researchers and this article. ---- CharlesGillingham (talk) 05:19, 12 September 2014 (UTC))
Hello Felix. My reading of the sections is not the same. Could you please quote the specific sentences you are referring to. I have reverted your edit as it is rather a narrow view of AI that exists mostly in the popular press, not the literature. pgr94 (talk) 18:28, 11 September 2014 (UTC)
Hello Pgr94; This is the list of the eight items which start off the article: 2.1 Deduction, reasoning, problem solving 2.2 Knowledge representation 2.3 Planning 2.4 Learning 2.5 Natural language processing (communication) 2.6 Perception 2.7 Motion and manipulation 2.8 Long-term goals. Each of these items is oriented to human-like intelligence. I have also emphasized 2.5, Natural language processing, as specifically unique to humans alone. Please clarify if this is the same outline that appears on your screen. Of the three approaches to artificial intelligence, weak-AI, Strong-AI, and normative AI, you should specify which one you are endorsing prior to reverting. My point is that the Lede should be consistent with the body of the article, and that it should not change until the new material is developed in the main body of the article. Human-like intelligence is what all the opening 8 sections are about. Make the Lede consistent with the contents of the article following WP:MoS. FelixRosch (talk) 20:11, 11 September 2014 (UTC)
It seems you just listed the sections rather than answer my query. Never mind.
The article is not based on human-like intelligence as you seem to be suggesting. If you look at animal cognition you will see that reasoning, planning, learning and language are not unique to humans. Consider also swarm intelligence and evolutionary algorithms that are not based on human behaviour. To say that the body of the article revolves around human-like intelligence is therefore inaccurate.
If you still disagree with both Charles and myself, may I suggest working towards consensus here before adding your change as I don't believe your change to the lede reflects the body of the article. pgr94 (talk) 23:51, 11 September 2014 (UTC)
All of the intelligent behaviors you listed above can be demonstrated by very "inhuman" programs. For example, a program can "deduce" the solution of a Sudoku puzzle by iterating through all of the possible combinations of numbers and testing each one. A database can "represent knowledge" as billions of nearly identical individual records. And so on. As for natural language processing, this includes tasks such as text mining, where a computer searches millions of web pages looking for a set of words and related grammatical structures. No human could do this task; a human would approach the problem a completely different way. Even Siri's linguistic abilities are based mostly on statistical correlations (using things like support vector machines or kernel methods) and not on neurology. Siri depends more on the mathematical theory of optimization than it does on our understanding of the way the brain processes language. ---- CharlesGillingham (talk) 05:19, 12 September 2014 (UTC)
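The generate-and-test point above can be made literal in a few lines: "deduce" a puzzle solution by enumerating every candidate and checking each, with no human-like reasoning anywhere. A minimal sketch on a toy 4×4 Latin-square puzzle (the grid and all names are invented for illustration; a real 9×9 Sudoku would need a smarter search):

```python
from itertools import product

# Pure generate-and-test "deduction": enumerate every way of filling the
# blanks of a toy 4x4 Latin-square puzzle (0 = blank) and keep the first
# filling that satisfies the row/column constraints.

grid = [
    [1, 2, 0, 0],
    [0, 0, 1, 2],
    [2, 0, 0, 1],
    [0, 1, 2, 0],
]

def valid(g):
    """Every row and every column must contain 1..4 exactly once."""
    cols = list(zip(*g))
    return all(sorted(line) == [1, 2, 3, 4] for line in list(g) + cols)

blanks = [(r, c) for r in range(4) for c in range(4) if grid[r][c] == 0]

solution = None
for combo in product([1, 2, 3, 4], repeat=len(blanks)):  # every combination
    candidate = [row[:] for row in grid]
    for (r, c), v in zip(blanks, combo):
        candidate[r][c] = v
    if valid(candidate):
        solution = candidate
        break
```

The program tests up to 4^8 = 65,536 candidates; no human would solve the puzzle this way, yet the output is a correct "deduction".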
@Pgr94; Your comment appears to state that because there are exceptions to the normative reading of AI, you can therefore justify changes to the Lede to reflect these exceptions. WP:MoS is the exact opposite of this: the Lede is required to give only a summary of material already used to describe the field covered in the main body of the article. There is no difficulty if you want to cover the exceptions in the main body of the article, and you can go ahead and do so as long as you cite your additions according to Wikipedia policy for being verifiable. The language used in section 2.1 is "that humans use when they solve puzzles...", and this is consistent with the other sections I have already enumerated for human-like intelligence. This article in its current form is overwhelmingly oriented to human-like intelligence applied normatively to establish the goals of AI. Arguing the exception can be covered in the main body but does not belong in the Lede according to Wikipedia policy. @CharlesGillingham; You appear now to be devoted to the Strong-AI position to support your answers. This is only one version of AI, and it is not the principal one covered in the main body of this article, which covers the goal of producing human-like intelligence and its principal objectives. Strong-AI, Weak-AI, and normative AI are three versions, and one should not be used to bias attention away from the main content of this article, which is the normative AI approach as discussed in each of the opening 8 sections. There is no difficulty if you want to bring in material to support your preference for Strong-AI in the main body of the article. Until you do so, the Strong-AI orientation should not affect what is represented in the Lede section.
Wikipedia policy is that only material in the main body of the article may be used in the Lede. FelixRosch (talk) 16:10, 12 September 2014 (UTC)
I have no idea what you mean by "Strong AI" in the paragraph above. I am defending the positions of John McCarthy, Rodney Brooks, Peter Norvig and Stuart Russell, along with most modern AI researchers. These researchers advocate logic, nouvelle AI and the intelligent agent paradigm (respectively). All of these are about as far from strong AI as you can get, in either of the two normal ways the term is used. So I have to ask you: what do you mean when you say "strong AI"? It seems very strange indeed to apply it to my arguments.
I also have no idea what you mean by "normative AI" -- could you point to a source that defines "strong AI", "weak AI" and "normative AI" in the way you are using them? My definitions are based on the leading AI textbooks, and they seem to be completely different than yours.
Finally, you still have not addressed any of the points that Pgr94 and I have brought up -- if, as you claim, AI research is trying to simulate "human like" intelligence, why do most major researchers reject "human like" intelligence as a model or a goal, and why are so many of the techniques and applications based on principles that have nothing to do with human biology or psychology? ---- CharlesGillingham (talk) 04:02, 14 September 2014 (UTC)
You still have not responded to my quote in bold face above that the references in all 8 (eight) opening sections of this article refer to human comparisons. You should read them, since you appear to be obviating the wording which they are using and as I have quoted it above. You now have two separate edits in two forms. These are two separate edits and you should not be automatically reverting them without discussion first. The first one is my preference, and I can continue this Talk discussion until you start reading the actual contents of all eight opening sections, which detail human-like intelligence. The other edit is restored, since there is no reason not to include the mention of the difference of general AI from strong AI and weak AI. Your comment on strong AI seems contradicted by your own editing of the very (disambiguation) page for it. The related pages, John Searle, etc., are all oriented to discussion of human comparisons of intelligence, as clearly stated on these links. Strong artificial intelligence, or Strong AI, may refer to: Artificial general intelligence, a hypothetical machine that exhibits behavior at least as skillful and flexible as humans do, and the research program of building such an artificial general intelligence; and Computational theory of mind, the philosophical position that human minds are (or can be usefully modeled as) computer programs. This position was named "strong AI" by John Searle in his Chinese room argument. Each of these links supports human-like intelligence comparisons as basic to understanding each of these terms. FelixRosch (talk) 15:21, 15 September 2014 (UTC)

All I'm saying is this: major AI researchers would (and do) object to defining AI as specifically and exclusively studying

"human-like" intelligence. They would prefer to define the field as studying intelligence in general, whether human or not. I have provided ample citations and quotations prove that this is the case. If you can't see that I have proved this point, then we are talking past each other. Repeatedly trying to add "human" or "human-like" or "human-ish" intelligence to the definition is simply incorrect.

I am happy to get WP:Arbitration on this matter, if you like, as long as it is understood that I only check Misplaced Pages once a week or so.

Re: many of the sections which define the problem refer to humans. This does not contradict what I am saying and does not suggest that Misplaced Pages should try to redefine the field in terms of human intelligence. Humans are the best example of intelligent behavior, so it is natural that we should use humans as an example when we are describing the problems that AI is solving. There are technical definitions of these problems that do not refer to humans: we can define reasoning in terms of logic, problem solving in terms of abstract rational agents, machine learning in terms of self-improving programs and so on. Once we have defined the task precisely and written a program that performs it to any degree, we're no longer talking about human intelligence -- we're talking about intelligence in general and machine intelligence in particular (which can be very "inhuman", as I demonstrated in an earlier post).

Re: strong AI. Yes, strong AI (in either sense) is defined in terms of human intelligence or consciousness. However, I am arguing that major AI researchers would prefer not to use "human" intelligence as the definition of the field, a position which points in the opposite direction from strong AI; the people I am arguing on behalf of are generally uninterested in strong AI (as Russell and Norvig write "most AI researchers don't care about the strong AI hypothesis"). So it was weird that you wrote I was "devoted to the Strong-AI position". Naturally, I wondered what on earth you were talking about.

The term "weak AI" is not generally used except in contrast to "strong AI", but if we must use it, I think you could characterize my argument as defending "weak AI"'s claim to be part of AI. In fact, "strong AI research" (known as artificial general intelligence) is a very small field indeed, and "weak AI" (if we must call it that) constitutes the vast majority of research, with thousands of successful applications and tens of thousands of researchers. ---- CharlesGillingham (talk) 00:35, 20 September 2014 (UTC)

Undid revision 626280716. WP:MoS requires Lede to be consistent with the main body of the article. Previous version of Lede is inconsistent between 1st and 4th paragraph on human-like intelligence. Current version is consistent. Each one of the opening sections is also based one-for-one on direct emulation of human-like intelligence. You may start by explaining why you have not addressed the fact that each of the opening 8 (eight) sections is a direct comparison to human-like intelligence. Also, please stop your personal attacks by posting odd variations on my reference to the emulation of human-like intelligence. Your deliberate contortion of this simple phrase to press your own minority view of weak-AI is against wikipedia policy. Page count statistics also appear to favor the mainstream version of human-like intelligence which was posted and not your minority weak-AI preference. Please stop edit warring, and please stop violating MoS policy and guidelines for the Lede. The first paragraph, as the fourth paragraph already is in the Lede, must be consistent and a summary of the material in the main body of the article, and not your admitted preference for the minority weak-AI viewpoint. FelixRosch (talk) 14:41, 20 September 2014 (UTC)
In response to your points above (1) I have "addressed the fact that each of the opening 8 (eight) sections is a direct comparison to human-like intelligence". It is in the paragraph above which begins with "Re: many of the sections which define the problem refer to humans.". (2) It's not a personal attack if I object every time you rephrase your contribution. I argue that the idea is incorrect and unsourced; the particular choice of words does not remove my objection. (3) As I have said before, I am not defending my own position, but the position of leading AI researchers and the vast majority of people in the field.
Restating my position: The precise, correct, widely accepted technical definition of AI is "the study and design of intelligent agents", as described in all the leading AI textbooks. Sources are in the first footnote. Leading AI researchers and the four most popular AI textbooks object to the idea that AI studies human intelligence (or "emulates" or "simulates" "human-like" intelligence).
Finally, with all due respect, you are edit warring. I would like to get WP:Arbitration. ----
I support getting arbitration. User:FelixRosch has not added constructively to this article and is pushing for a narrow interpretation of the term "artificial intelligence" which the literature does not support. Strong claims need to be backed up by good sources which Rosch has yet to do. Instead s/he appears to be cherrypicking from the article and edit warring over the lede. The article is not beyond improvement, but this is not the way to go about it. pgr94 (talk) 16:52, 20 September 2014 (UTC)
Pgr94 has not been part of this discussion for over a week, and the same suggestion is being made here, that you or CharlesG are welcome to bring in any cited material you wish in order to support the highly generalized version of the Lede sentence which you appear to want to support. Until you bring in that material, WP:MoS is clear that the Lede is only supposed to summarize material which exists in the main body of the article. User:CharlesG keeps referring abstractly to multiple references he is familiar with and continues not to bring them into the main body of the article first. WP:MoS requires that you develop your material in the main body of the article before you summarize it in the Lede section. Without that material you cannot support an overly generalized version of the Lede sentence. The article in its current form, in all eight (8) of its opening sections is oriented to human-like intelligence (Sections 2.1, 2.2, ..., 2.8). Also, the fourth paragraph in the Lede section now firmly states that the body of the article is based on human intelligence as the basis for the outline of the article and its contents. According to WP:MoS for the Lede, your new material must be brought into the main body of the article prior to making generalizations about it which you wish to place in the Lede section. FelixRosch (talk) 19:45, 20 September 2014 (UTC)
As I have said before, the material you are requesting is already in the article. I will quote the article again:
From the lede: Major AI researchers and textbooks define this field as "the study and design of intelligent agents"
First footnote: Definition of AI as the study of intelligent agents:
  • Poole, Mackworth & Goebel 1998, p. 1, which provides the version that is used in this article. Note that they use the term "computational intelligence" as a synonym for artificial intelligence.
  • Russell & Norvig (2003), who prefer the term "rational agent" and write "The whole-agent view is now widely accepted in the field" (Russell & Norvig 2003, p. 55).
  • Nilsson 1998
Comment: Note that an intelligent agent or rational agent is (quite deliberately) not just a human being. It's more general: it can be a machine as simple as a thermostat or as complex as a firm or nation.
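The "intelligent agent" definition is deliberately minimal, and a toy sketch makes that concrete. The class name, thresholds, and action labels below are my own invention for illustration, not drawn from any cited textbook; the point is only that even a thermostat "perceives its environment and takes actions that maximize its chances of success":

```python
# A hypothetical, minimal "rational agent": it perceives one aspect of
# its environment (temperature) and chooses the action that best serves
# its goal (keeping the temperature near a set point).

class ThermostatAgent:
    def __init__(self, set_point: float):
        self.set_point = set_point  # the goal the agent pursues

    def act(self, perceived_temp: float) -> str:
        # Policy: pick the action that moves the world toward the goal.
        if perceived_temp < self.set_point - 1:
            return "heat_on"
        if perceived_temp > self.set_point + 1:
            return "heat_off"
        return "idle"

agent = ThermostatAgent(set_point=20.0)
print(agent.act(17.0))  # heat_on
print(agent.act(22.5))  # heat_off
```

Nothing here models human psychology, yet it fits the agent definition; that is the sense in which humans are just one example among many.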
From the section Approaches:
A few of the most long-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?
From the corresponding footnote, Biological intelligence vs. intelligence in general:
  • Russell & Norvig 2003, pp. 2–3, who make the analogy with aeronautical engineering.
  • McCorduck 2004, pp. 100–101, who writes that there are "two major branches of artificial intelligence: one aimed at producing intelligent behavior regardless of how it was accomplished, and the other aimed at modeling intelligent processes found in nature, particularly human ones."
  • Kolata 1982, a paper in Science, which describes John McCarthy's indifference to biological models. Kolata quotes McCarthy as writing: "This is AI, so we don't care if it's psychologically real". McCarthy recently reiterated his position at the AI@50 conference where he said "Artificial intelligence is not, by definition, simulation of human intelligence" (Maker 2006).
Comment: All of these sources (and others; Rodney Brooks's Elephants Don't Play Chess paper should also be cited) are part of a debate within the field that lasted from the 1960s to the 90s, and was mostly settled by the "intelligent agent" paradigm. The exceptions would be the relatively small (but extremely interesting) field of artificial general intelligence research. This field defines itself in terms of human intelligence. The field of AI, as a whole, does not.
This article has gone to great pains to stay in sync with leading AI textbooks, and the leading AI textbook addresses this issue (see ch. 2 of Russell & Norvig's textbook) and comes down firmly against defining the field in terms of human intelligence. Thus "human" does not belong in the lead.
I have asked for dispute resolution. ---- CharlesGillingham (talk) 19:07, 21 September 2014 (UTC)

Arbitration ?

Why is anyone suggesting that arbitration might be in order? Arbitration is the last step in dispute resolution, and is used when user conduct issues make it impossible to resolve a content dispute. There appear to be content issues here, such as whether the term "human-like" should be used, but I don't see any evidence of conduct issues. That is, it appears that the editors here are being civil and are not engaged in disruptive editing. I do see that a thread has been opened at the dispute resolution noticeboard, an appropriate step in resolving content issues. If you haven't tried everything else, you don't want arbitration. Robert McClenon (talk) 03:08, 21 September 2014 (UTC)

You're right, dispute resolution is the next step. I have opened a thread. (Never been in a dispute that we couldn't resolve ourselves before ... the process is unfamiliar to me.) ---- CharlesGillingham (talk) 19:08, 21 September 2014 (UTC)
I am now adding an RFC, below. ---- CharlesGillingham (talk) 04:58, 23 September 2014 (UTC)

Alternate versions of lede

In looking over the recent discussion, it appears that the basic question is what should be in the article lede paragraph. Can each of the editors with different ideas provide a draft for the lede? If the issue is indeed over what should be in the lede, then perhaps a content Request for Comments might be an alternative to formal dispute resolution. Robert McClenon (talk) 03:24, 21 September 2014 (UTC)

Certainly. I would like the lede to read more or less as it has since 2008 or so:

Artificial intelligence (AI) is the intelligence exhibited by machines or software. It is also an academic field of study. Major AI researchers and textbooks define this field as "the study and design of intelligent agents", where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success. John McCarthy, who coined the term in 1955, defines it as "the science and engineering of making intelligent machines".

---- CharlesGillingham (talk) 19:12, 21 September 2014 (UTC)

We can nitpick this stuff to death, and I'm already resigned that the lede isn't going to be exactly what I think it should be. BTW, some of my comments yesterday were based on my recollection of an older version of the lede; there was so much back and forth editing. I can live with the lede as it currently is, but I don't like the word "emulating". To me "emulating" still implies we are trying to do it the way humans do. E.g., when I emulate DOS on a Windows machine or emulate Lisp on an IBM mainframe. When you emulate, you essentially define some meta-layer and then just run the same software, and you trick it into thinking it's running on platform Y rather than X. I would prefer words like "designing" or something like that. But it's a minor point. I'm not going to start editing myself because I think there are already enough people going back and forth on this, so just my opinion. --MadScientistX11 (talk) 15:20, 30 September 2014 (UTC)

Follow-Up

Based on a comment posted by User:FelixRosch at my talk page, it appears that the main issue is whether the first sentence of the lede should include "human-like". If that is the issue of disagreement, then the Request for Comments process is appropriate. The RFC process runs for 30 days unless there is clear consensus in less time. Formal dispute resolution can take a while also. Is the main issue the word "human-like"? Robert McClenon (talk) 15:12, 22 September 2014 (UTC)

Yes that is the issue. ---- CharlesGillingham (talk) 16:59, 22 September 2014 (UTC)
I have a substantive opinion, and a relatively strong substantive opinion, but I don't want to say what it is at this time until we can agree procedurally on how to settle the question. I would prefer the 30-day semi-automated process of an RFC rather than the formality of mediation-like formal dispute resolution, largely because it gets a better consensus via publishing the RFC in the list of RFCs and in random notification of the RFC by the bot. Unless anyone has a reason to go with mediation-like dispute resolution, I would prefer to get the RFC moving. Robert McClenon (talk) 21:41, 22 September 2014 (UTC)
I am starting the rfc below. As I said in the dispute resolution, I've never had a problem like this before. ---- CharlesGillingham (talk) 05:54, 23 September 2014 (UTC)

RfC: Should this article define AI as studying/simulating "intelligence" or "human-like intelligence"?

Deleting RFC header as per discussion. New RFC will be posted if required. Robert McClenon (talk) 15:03, 1 October 2014 (UTC)

Argument in favor of "intelligence"

The article should define AI as studying "intelligence" in general rather than specifically "human-like intelligence" because

  1. AI founder John McCarthy (computer scientist) writes "AI is not, by definition, a simulation of human intelligence", and has argued forcefully and repeatedly that AI should not simulate human intelligence, but should focus on solving problems that people use intelligence to solve.
  2. The leading AI textbook, Russell and Norvig's Artificial Intelligence: A Modern Approach, defines AI as "the study and design of rational agents", a term (like the more common term intelligent agent) which is carefully defined to include simple rational agents like thermostats and complex rational agents like firms or nations, as well as insects, human beings, and other living things. All of these are "rational agents", all of them provide insight into the mechanism of intelligent behavior, and humans are just one example among many. They also write that the "whole-agent view is now widely accepted in the field."
  3. Rodney Brooks (leader of MIT's AI laboratories for many years) argued forcefully and repeatedly that AI research (specifically robotics) should not attempt to simulate human-like abilities such as reasoning and deduction, but instead should focus on animal-like abilities such as survival and locomotion.
  4. The majority of successful AI applications do not use "human-like" reasoning, and instead rely on statistical techniques (such as bayesian nets or support vector machines), models based on the behavior of animals (such as particle swarm optimization), models based on natural selection, and so on. Even neural networks are an abstract mathematical model that does not typically simulate any part of a human brain. The last successful approach that modeled human reasoning was the expert systems of the 1980s, which are primarily of historical interest. Applications based on human biology or psychology do exist and may one day regain the center stage (consider Jeff Hawkins' Numenta, for one), but as of 2014, they are on the back burner.
  5. From the 1960s to the 1980s there was some debate over the value of human-like intelligence as a model, which was mostly settled by the all-inclusive "intelligent agent" paradigm. (See History of AI#The importance of having a body: Nouvelle AI and embodied reason and History of AI#Intelligence agents.) The exceptions would be the relatively small (but extremely interesting) field of artificial general intelligence research. This sub-field defines itself in terms of human intelligence, as do some individual researchers and journalists. The field of AI, as a whole, does not.

All of these points are made in the article, with ample references:

From the lede: Major AI researchers and textbooks define this field as "the study and design of intelligent agents"
First footnote: Definition of AI as the study of intelligent agents:
  • Poole, Mackworth & Goebel 1998, p. 1, which provides the version that is used in this article. Note that they use the term "computational intelligence" as a synonym for artificial intelligence.
  • Russell & Norvig (2003), who prefer the term "rational agent" and write "The whole-agent view is now widely accepted in the field" (Russell & Norvig 2003, p. 55).
  • Nilsson 1998
Comment: Note that an intelligent agent or rational agent is (quite deliberately) not just a human being. It's more general: it can be a machine as simple as a thermostat or as complex as a firm or nation.
From the section Approaches:
A few of the most long-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?
From the corresponding footnote, Biological intelligence vs. intelligence in general:
  • Russell & Norvig 2003, pp. 2–3, who make the analogy with aeronautical engineering.
  • McCorduck 2004, pp. 100–101, who writes that there are "two major branches of artificial intelligence: one aimed at producing intelligent behavior regardless of how it was accomplished, and the other aimed at modeling intelligent processes found in nature, particularly human ones."
  • Kolata 1982, a paper in Science, which describes John McCarthy's indifference to biological models. Kolata quotes McCarthy as writing: "This is AI, so we don't care if it's psychologically real". McCarthy recently reiterated his position at the AI@50 conference where he said "Artificial intelligence is not, by definition, simulation of human intelligence" (Maker 2006).

FelixRosch has succeeded in showing that human-like intelligence is interesting to AI research, but not that it defines AI research. Defining artificial intelligence as studying/simulating "human-like intelligence" is simply incorrect; it is not how the majority of AI researchers, leaders and major textbooks define the field. ---- CharlesGillingham (talk) 05:54, 23 September 2014 (UTC)

Comments

I fully support the position presented by User:CharlesGillingham.

User:FelixRosch says The article in its current form, in all eight (8) of its opening sections is oriented to human-like intelligence (Sections 2.1, 2.2, ..., 2.8). I fail to see how sections 2.3 planning, 2.4 learning, 2.6 perception, 2.7 motion and manipulation relate only to humans. Could you please quote the exact wording in each of these sections that give you this impression? pgr94 (talk) 22:18, 23 September 2014 (UTC)

User:FelixRosch says "User:CharlesG keeps referring abstractly to multiple references on the article Talk page (and in this RfC) which he is familiar with, and continues not to bring them into the main body of the article first". The references are already in the article. The material in the table above is cut-and-pasted from the article. ---- CharlesGillingham (talk) 06:12, 24 September 2014 (UTC)

I support the position in favor of "intelligence", for the reasons stated by User:CharlesGillingham. Pintoch (talk) 07:27, 24 September 2014 (UTC)

The origins of the artificial intelligence discipline did largely have to do with "human-like" intelligence. However, much of modern AI, including most of its successes, has had to do with various sorts of non-human intelligence. To restrict this article only to efforts (mostly unsuccessful) at human-like intelligence would be to impoverish the article. Robert McClenon (talk) 03:16, 25 September 2014 (UTC)

  • If I'm understanding the question correctly, the answer is obvious. AI is absolutely not just about studying "human like" intelligence but intelligence in general, which includes human like intelligence. I mean there are whole sub-disciplines of AI, the formal methods people in particular, who study mathematical formalisms that are about how to represent logic and information in general, not just human intelligence. To pick one specific example: First Order Logic. People are all over the map on how much FOL relates to human intelligence. Some would say very much, others would say not at all, but I don't think anyone who has worked in AI would deny that FOL and languages based on it are absolutely a part of AI. Or another example is Deep Blue. It performs at the grand master level, but some people would argue the way it computes is very different from the way a human does, and -- at least in my experience -- the people who code programs like Deep Blue don't really care that much either way; they want to solve hard problems as effectively as possible. The analogy I used to hear all the time was that AI is to human cognition as aeronautics is to bird flight. An aeronautics engineer may study how birds fly in order to better design a plane, but she will never be constrained by how birds do it because airplanes are fundamentally different, and the same is true with computers: human intelligence will definitely impact how we design smart computers, but it won't define it, and AI researchers are not bound to stay within the limits of how humans solve problems. --MadScientistX11 (talk) 15:14, 25 September 2014 (UTC)
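To illustrate the formal-methods point in the comment above: logical inference can be defined and executed with no reference at all to how humans reason. The following forward-chaining sketch over Horn-style rules is entirely hypothetical (the rules and facts are invented for the example), but it is representative of the kind of formalism that is uncontroversially part of AI:

```python
# Hypothetical toy: forward chaining over Horn-style rules.
# Repeatedly fire any rule whose premises are all known facts,
# until no new conclusions can be derived.

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (["rain"], "wet_ground"),
    (["wet_ground"], "slippery"),
]
print(sorted(forward_chain(["rain"], rules)))  # ['rain', 'slippery', 'wet_ground']
```

Whether this resembles human reasoning is beside the point; it solves an inference problem mechanically, which is the sense in which AI research need not be constrained by how humans do it.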
  • Comment The sources cited by CharlesGillingham do not all contradict the article. Apparently the current wording is confusing, but:
  • "Human-like" need not be read as "simulating humans". It can also be read as "human-level", which is typically the (or a) goal of AI. Speaking from my field of expertise, all the Bayes nets and neural nets and graphical models in the world are still trying to match hand-labeled examples, i.e. obtain human-level performance, even though the claim that they do anything like human brains is very, very strenuous. (Though I can point you to recent papers in major venues where this is still used as a selling point. Here's one of them.)
  • More importantly, the article speaks of emulating, not simulating, intelligence. Citing Wiktionary, emulation is "The endeavor or desire to equal or excel someone else in qualities or actions" (I don't have my Oxford Learner's Dictionary near, but I assure you the definition will be similar). In other words, emulation can exceed the qualities of the thing being emulated, so there's no need to stop at a human level of performance a priori; and emulation does not need to use "human means", or restrict itself to "cognitively plausible" ways of achieving its goal.
  • The phrase "though other variations of AI such as strong-AI and weak-AI are also studied" seems to have been added by someone who didn't understand the simulation-emulation distinction. I'll remove this right away, as it also just drops two technical terms on the reader without introducing them or explaining the (perceived) distinction with the emulation of human-like intelligence.
I conclude that the RFC is based on false premises (but I have no objection against a better formulation that is more in line with reliable sources). QVVERTYVS (hm?) 09:01, 1 October 2014 (UTC)
The simulation/emulation distinction that you are making does not appear in our most reliable source, Russell and Norvig's Artificial Intelligence: A Modern Approach (the most popular textbook on the topic). They categorize definitions of AI along these orthogonal lines: acting/"thinking" (i.e. behavior vs. algorithm), human/rational (human emulation vs. directed at defined goals), and they argue that AI is most usefully defined in terms of "acting rationally". The same section describes the long-term debate over the definition of AI. (See pgs. 1-5 of the second edition). The argument against defining AI as "human-like" (see my post at the top of the first RfC) is that R&N, as well as AI leaders John McCarthy (computer scientist) and Rodney Brooks all argue that AI should NOT be defined as "human-like". While this does not represent a unanimous consensus of all sources, of course, nevertheless we certainly can't simply bulldoze over the majority opinion and substitute our own. Cutting the word "human-like" gives us a definition that everyone would find acceptable. ---- CharlesGillingham (talk) 00:51, 9 October 2014 (UTC)

Argument in favor of "human-like intelligence"

Please Delete or Strike RFC

I am requesting that the author of the RFC delete or strike the RFC, because it is non-neutral in its wording. Robert McClenon (talk) 22:30, 23 September 2014 (UTC)

Just as the issue with the article is only with its first sentence, my issue with the RFC is with its first sentence. Robert McClenon (talk) 22:31, 23 September 2014 (UTC)
Let's face it, I don't know how to do this .... is the format above acceptable? Can you help me fix it? ---- CharlesGillingham (talk) 05:32, 24 September 2014 (UTC)
Much better. Robert McClenon (talk) 03:12, 25 September 2014 (UTC)

AI definition: What is "Strong" vs. "Weak" AI and where is it referenced?

The current definition of AI contrasts "strong" vs "weak" AI. I'm not familiar with that distinction. Who makes it, and where is it referenced? Also, as a meta-point, I've noticed there seems to be a lot of deference to the Russell and Norvig book on AI. That is only one book, and neither author has the standing of people who have also written general AI books, such as Patrick Winston or Feigenbaum's AI handbook. Here is Winston's definition: "Artificial Intelligence is the study of ideas that enable computers to be intelligent" from Artificial Intelligence by Patrick Winston, p. 1. I think such a simple definition is what we should use to start the article. --MadScientistX11 (talk) 14:59, 29 September 2014 (UTC)

I just saw that Russell and Norvig have a definition of strong vs. weak AI (section 1.5, p. 29). Their definition is that strong AI holds that machines can be conscious and weak AI holds that they can't. That is a very different definition from what is currently in the intro text. First of all, I think the whole distinction is unimportant anyway. It matters to people for whom AI is a purely academic discipline, but the people who actually do AI, who build expert systems, ontologies, etc. and use them in the real world, don't care one way or the other. I think that part of the intro definitely needs to be changed. I don't think strong vs. weak is important enough to be mentioned so early on, but if it is, it should at least be consistent with the definition of R&N. --MadScientistX11 (talk) 15:58, 29 September 2014 (UTC)
I agree and have removed this sentence once again. (It may be re-added by FelixRosch shortly, assuming things continue to happen as they have been happening these last few weeks.)
I agree that (1) undefined terms such as strong AI or weak AI should not be in the second sentence because they have not yet been defined for the reader. Defining them correctly would take too much space to put in the lede, thus these terms can't be in the lede. (2) The distinction between different kinds of AI is not important at this point. The highest priority is the widely-accepted academic definition of AI (sentence 3) and the definition intended by the man who coined the term (sentence 4). These are much higher priorities. (3) The term "strong AI" appears at only two points in the article (once as a synonym for artificial general intelligence, and once as the philosophical position identified by John Searle as the strong AI thesis). Thus, given all the material we have to cover in this article, this is a relatively unimportant topic, and does not belong in the lede because it summarizes such a small fraction of the material in the article. ---- CharlesGillingham (talk) 02:05, 30 September 2014 (UTC)
Not surprisingly, the sentence with strong AI/weak AI has been re-added by Felix Rosch. Feel free to remove it if you agree with me that the sentence doesn't work. ---- CharlesGillingham (talk) 17:00, 30 September 2014 (UTC)

Deep Learning

I've noticed a couple of reverts on this topic. I agree with the people who don't want Deep Learning as a separate major sub-heading. If you look at the major AI textbooks, none of them have a chapter heading "Deep Learning" to my recollection. AI is such a broad topic that we need to be sure not to try to have this article cover every single thing that has ever been described as AI, but stick to the major topics. Deep learning merits its own article and a link from this article to it, but not a whole section in this article. Rather than just keep reverting, I think we should try to reach some consensus first, and the advocates of Deep Learning should cite some major AI textbooks that have it as a major topic, or say why they think that is not an appropriate criterion for what things should be covered in this article. --MadScientistX11 (talk) 15:50, 29 September 2014 (UTC)

"Deep learning" is not mentioned in the leading AI textbook, Russell and Norvig's Artificial Intelligence: A Modern Approach. This is why I removed this material, as a five-paragraph section is WP:UNDUE weight for a relatively minor topic. One sentence in the section on neural networks would be appropriate, if anything.
I noticed FelixRosch has reverted my removal ... ---- CharlesGillingham (talk) 02:07, 30 September 2014 (UTC)
@Felix -- if you would like to make an argument against me, now is the time. I will be removing the deep learning section eventually, unless you can provide a convincing argument that it is four times as important as statistical AI, twelve times as important as neural networks, fuzzy computation and evolutionary computation, or equally important to the history of AI as symbolic AI. This is the weight this article gives to these sub-fields.
I also remind you that the only thing that counts here is reliable sources, and "deep learning" does not appear in the 1000+ pages of the most popular AI textbook. ---- CharlesGillingham (talk) 03:54, 8 October 2014 (UTC)

Some definitions of AI

Since we still seem to need a consensus on how to define AI I thought it would be worthwhile to just post a few from some of the classic text books:

  • "Artificial Intelligence is the study of ideas that enable computers to be intelligent" from Artificial Intelligence by Patrick Wilson p.1.
  • "The field of artificial intelligence, or AI, attempts to understand intelligent entities. Thus, one reason to study it is to learn more about ourselves. But unlike philosophy and psychology, which are also concerned with intelligence, AI strives to build intelligent entities as well as understand them. Another reason to study AI is that these constructed intelligent entities are interesting and useful in their own right" Russel and Norvig AI A Modern Approach p. 3
  • "Artificial Intelligence is the part of computer science concerned with designing intelligent computer systems, that is systems we associate with intelligence in human behavior: understanding language, learning, reasoning..." AI Handbook Barr and Feigenbaum https://archive.org/stream/handbookofartific01barr#page/n19/mode/2up

I like the Barr and Feigenbaum definition the best. Note two things though: one, EVERYONE describes it as "the study of", not as the intelligence itself, which is in contrast with the definition here; and two, NONE of them say anything about being constrained by the way humans solve problems. Again, I like the Feigenbaum one best because it makes the valid point, which is similar to what is there now but importantly different: making computers do things that are thought of as human intelligence IS AI, but without being constrained by the WAY humans do those things. --MadScientistX11 (talk) 16:29, 29 September 2014 (UTC)

These are definitions of the academic field of AI research, i.e. "the study of". I am fine with restricting the definition to only describe the academic field, if everyone thinks that's best. Some years ago, we had something like this: "Artificial intelligence is a branch of computer science which studies intelligent machines and software," i.e., the definition was strictly about the academic field.
I think that there are actually two other uses of the term outside of the academic AI, but we can choose to ignore this if we want, because the article is definitely about academic AI, and not about science fiction or other popular sources. The other two uses are: (2) the intelligence of machines or software (3) an intelligent machine or program (this usage is common in gaming and science fiction). The article for the last several years has started with (2) and ignored (3).
Feel free to try to fix this. ---- CharlesGillingham (talk) 02:26, 30 September 2014 (UTC)
When you say "the article is about academic AI" that's partly true but AI is one of those concepts like distributed computing that has both a strong academic and a strong industry flavor. My background is in both btw, I've worked in the AI group of a Major Consulting firm as well as doing research for DARPA and USAF. And where I'm coming from with some of my comments is more from the industry side. It's my industry experience that makes me say that the whole "is it just about human intelligence" is just a no brainer. People who aren't academics NEVER think like that in my experience, they want to build smart systems that solve hard problems and they will use any technique that works best. --MadScientistX11 (talk) 15:11, 30 September 2014 (UTC)
Sure -- it's about mainstream academic and industrial AI, as opposed to pop-science, science fiction and any of those thousands of "pet theories" and "alternative forms" of AI.
As I said before, feel free to rewrite the first couple of sentences any way that makes sense to you; it seems like you know what you're talking about. I'd like to keep the intelligent agent/rational agent definition and McCarthy's quote. The simple definition for the lay reader can go any way you think is best. ---- CharlesGillingham (talk) 16:58, 30 September 2014 (UTC)
The definition we quote in the intro is from Poole, Mackworth & Goebel 1998, p. 1. I like it because it's from a popular textbook, it's concise, to the point, does not equivocate, does not raise any unnecessary complications and finds a way to define AI that does not require also defining human intelligence, sidestepping all possible philosophical and technical objections. ---- CharlesGillingham (talk) 02:38, 30 September 2014 (UTC)

What Needs Discussing?

There seems to have been too much reverting in the past few days. Let's identify the issues. There is disagreement as to whether to include a paragraph on "deep learning". There is disagreement on whether to mention "strong AI" and "weak AI". I think that strong and weak should be mentioned in the article, but not in the lede, but that is only my opinion. What other disagreements are there, besides the "human-like" question that is being decided by RFC? Robert McClenon (talk) 02:00, 30 September 2014 (UTC)

Here is a summary of the current editorial issues.
1) An ongoing dispute about the lede, which has lasted several weeks. FelixRosch's latest contribution to the lede is this phrase: "which generally studies the goal of emulating human-like intelligence, though other variations of AI such as strong-AI and weak-AI are also studied." This phrase has been added and removed several times. I have three objections to this phrase:
"Human-like" intelligence
(Covered by the RfC above) There is an ongoing dispute that the term "human-like intelligence" should not be used to define AI.
Strong AI, weak AI
(Covered by the discussion started by MadScientist above) MadScientist and I both have objections to introducing these terms in the second sentence of the article.
The writing
And finally, in my opinion it is an awkward sentence, which reads poorly.
2) "Deep learning": (Covered by the discussion started by MadScientist above). The section added by FelixRosch about "Deep learning" is WP:UNDUE weight, in my opinion and MadScientist's. This section is copied and pasted from the article deep learning, and (I would argue) that is where it belongs. ---- CharlesGillingham (talk) 03:27, 30 September 2014 (UTC)
Deep Learning does seem like it now has undue weight. ... But, without that section it seems like AI techniques are almost entirely symbolic and strictly logical, which is also wrong. Is there a way to summarize Deep learning, traditional neural networks, and other more black-boxy techniques? APL (talk) 13:51, 30 September 2014 (UTC)
I would argue we have a consensus that Deep Learning has undue weight. As for the other issues: I also agree things like connectionist frameworks: Minsky, Pappert, Arbib, Churchland (those are the authors off the top of my head that I know, I don't know that part of the field though) needs more emphasis HOWEVER, I would strongly urge we table that. Let's sort out Deep Learning and the lede first and then move on to other issues. --MadScientistX11 (talk) 15:26, 30 September 2014 (UTC)
@APL: I don't agree with your reading of the "Approaches" section. Cybernetics (1930s-1950s) and symbolic/logical/knowledge-based "GOFAI" (1960s-1980s) are presented as failed approaches that have been mostly superseded by newer approaches. Deep learning is one example of what the article calls statistical AI and sub-symbolic AI, as are all modern neural network methods.
As I said, I think that deep learning belongs in the section under Tools called Neural networks. It seems to me that deep learning (as described in Misplaced Pages) is one new neural network technique among the many that have been developed in the last decade. The neural network section mentions Jeff Hawkins' Hierarchical Temporal Memory approach to neural networks; it could also mention Hinton's deep learning if everyone thinks that's important. However, I have to say, I think it's possible to come up with at least a dozen more examples of interesting new approaches to neural networks from the last decade, and we don't have room to mention them all. ---- CharlesGillingham (talk) 03:56, 1 October 2014 (UTC)
@APL & @MadScientist -- do you have any objection to moving your posts in to the section #Deep learning above? ---- CharlesGillingham (talk) 00:54, 9 October 2014 (UTC)

RFC

In further looking at the RFC, it is still non-neutral and has everyone confused. I would like to strike the RFC, and wait about 24 hours, and then create a new RFC with nothing but a question as to the lede sentence, and any other questions that are well-defined. Arguments in favor of a position can then be included as discussion. Unless anyone strongly objects, I will strike the RFC. (If anyone does object, we have to have an RFC on whether to strike the RFC. -:) ). Robert McClenon (talk) 14:55, 30 September 2014 (UTC)

My 2 cents is don't even bother making it an RFC. You end up getting a bunch of people who have little or no actual editing experience pontificating and going off on tangents. Just stick to a regular discussion in the talk section and try to keep it as focused as possible on specific editing questions. I think an RFC is overkill and that it slows down a real consensus and moving forward with actual editing which should be the goal. --MadScientistX11 (talk) 15:03, 30 September 2014 (UTC)
@Robert: I realize this is a lot to ask, but do you think you could start the RFC and help us figure out how to end this? As I've said before, I don't really understand why this dispute is continuing and why the normal standards of evidence are being ignored. I just want it to stop. How do we muster the necessary support to end this all-fronts total edit war? ---- CharlesGillingham (talk) 16:53, 30 September 2014 (UTC)
I guess I should spell this out a little more directly -- I'm trying to assume good faith here. What, exactly, does it take in order to allow us to remove the term "human-like" from the lede? We have a huge body of evidence that this is the right thing to do, absolutely no coherent evidence that it is wrong thing to do, a consensus of several editors here (including yourself) who agree that the term does not belong in the lede. However, every time I remove it from the lede, it gets restored by FelixRosche, thus I find myself in an edit war. I don't know what to do at this point.
I'm not sure exactly what's wrong with the RFC -- the question is clear, general and simple and the corresponding editorial choices are obvious. Is the problem that there is only one side presented? It seems to me that should be reason to end the issue as settled -- if FelixRosche doesn't care to make an argument, then let's be done with it. ---- CharlesGillingham (talk) 03:35, 1 October 2014 (UTC)
One last thought: editors should be aware that FelixRosche has added the term "human-like" back into the article many times, in many different forms, with many different edits. The RFC has to settle the issue of "human-like" in general, so that he doesn't just change the sentence again. (And I apologize if this seems to be bad faith; it's not -- I'm just betting the percentages here: "the best predictor of future behavior is past behavior"). ---- CharlesGillingham (talk) 04:02, 1 October 2014 (UTC)
I've struck "human-like" from the lede again. We need an RFC if User:FelixRosch actually is ready to argue that "human-like" should be somewhere in the lede. If he is willing to agree that it doesn't need to be in the lede, then we can leave it out. If he really wants it in, then we need some sort of resolution process to keep it out. I have argued in favor of RFC rather than DRN. Is he willing to leave human-like out of the lede, or does he really think it belongs, in which case we need an RFC? Robert McClenon (talk) 15:01, 1 October 2014 (UTC)
I am willing to formulate the RFC. The RFC itself will be brief and neutral. Arguments for or against "human-like" can be in the !votes or the discussion. In response to the comment that we may not need an RFC, I have asked User:FelixRosch on his talk page whether he is willing to agree that consensus is against the inclusion of "human-like" in the first paragraph. If he agrees, we don't have an issue. If he wants it in the first paragraph, then we should use either RFC or DRN, and I prefer RFC, because it receives wider attention. Robert McClenon (talk) 16:37, 1 October 2014 (UTC)
The third and fourth paragraphs of the introduction to the article do include references to what is actually human-like intelligence. In particular, the third paragraph refers to artificial general intelligence, and the fourth paragraph refers to myth and fiction. My own opinion is that those references are satisfactory, and that the only real issue has to do with the first paragraph. If anyone objects to the third and fourth paragraphs, then we may need another part to the RFC. Robert McClenon (talk) 16:37, 1 October 2014 (UTC)
Your comment in the above seems to have missed the additions of the editor Qwertyus which are worthy of some consideration. I am supporting Qwertyus even though the suggestion abridges my edit substantially and am reverting to that version as offering a point of agreement between editors which was previously not available. In restoring the Qwertyus version, I shall also stipulate that if (If) it is acceptable to all involved editors, then I shall not pursue further changes to the first sentence of the Lede which has been debated. Second, if (If) the neutral Qwertyus edit is acceptable, then I will stipulate that I shall accept the abridgment to my second sentence in the first paragraph of the Lede as well with the dropping of the phrase dealing with weak AI and strong AI there. The rest of the material would need to remain in its Qwertyus form, and all editors can return to regular editing activities. My previous offer that both @CharlesG and @RobertM, as explicit supporters of weak-AI, will also still stand as an open invitation to them to further develop the sections and content in the main body of the article dealing with weak-AI. Your own supporter @MadScientist has even asked you, Where is it?, where is it? My edit here is to support Qwertyus as offering a useful edit. FelixRosch (talk) 14:29, 2 October 2014 (UTC)
It is not acceptable, of course, as I have argued above.
We do not need your permission "to return regular editing activities".
The term "weak AI" is never used in the way you are using it, so please don't call me a "supporter" of it. Do you mean "AI research into non-humanlike intelligence as well as human-like intelligence"? That would seem to follow from the position you hold. If so, then I must point out, for the third or fourth time, that most of the article is about what you call "weak AI". None of the topics is exclusively about human-like intelligence. Please read my earlier posts. ---- CharlesGillingham (talk) 09:01, 7 October 2014 (UTC)
Your comment appears to have missed the useful additions of editor @Qwerty. Your co-editor, @RobertM, has also declined all comment on this edit in preference to his posting a poorly formed RfC replacement for the previous defective RfC. Unless he joins this discussion or replaces/withdraws the currently poorly formed RfC, then it shall be difficult to respond. Your own version was posted as a full page ad for "Weak-AI" on the previous RfC. This discussion must be made on the basis of the current version of the article. FelixRosch (talk) 14:36, 7 October 2014 (UTC)
I am aware of Qwerty's contribution and I agree that is useful (especially in that he removed the misuse of the terms "strong AI" and "weak AI"). However, it does not change the fact that major AI textbooks and major AI researchers deliberately avoid defining artificial intelligence in terms of human intelligence, and that removing the word "human-like" does no harm to the article. I have proven this with solid evidence from all the major sources. Qwerty's actions are irrelevant in that he did not disprove these facts, and neither have you. ---- CharlesGillingham (talk) 18:10, 7 October 2014 (UTC)
That is still not a justification for an overly generalized version of the Lede section which is being supported by your co-editor User:RobertM and yourself on the poorly formed RfC below. Nor is your personal attack justified on @Qwerty calling those edits "irrelevant". Please note that your co-editor RobertM is not joining you here to support you on this. FelixRosch (talk) 18:30, 7 October 2014 (UTC)
You are not reading my post very carefully. DO NOT accuse me of a personal attack -- I complimented Qwerty on his edit. His edit was fine, but the original, ongoing issue involves the term "human-like", and Qwerty's edit did not change this. There is no consensus for a version that says AI "generally studies the goal of emulating human-like intelligence." This is the issue at hand. I did not say that Qwerty's edit was irrelevant. It is your comments that are not helpful and that are avoiding the subject.
The most reliable mainstream source (Russell & Norvig) rejects the idea of emulating human-like intelligence as goal for AI. It doesn't matter what I think, or what you think, or what Robert thinks. This is not a vote, this is not an issue that we get to decide ourselves. It has already been decided by the mainstream AI sources. You have no basis for your argument, other than your own insistence.
And, as I have said before: this is not a position that I personally agree with. This is a position that the article must take, because it is the only one available from the most reliable source. We don't get to make up things here on Misplaced Pages and then just insist on them. ---- CharlesGillingham (talk) 03:28, 8 October 2014 (UTC)
Your personal attacks upon @Qwertyus must stop and calling him "irrelevant", to use your word, is not Misplaced Pages policy. You must also stop misrepresenting the case to admin @Redrose64 that your edit is unanimous since your poorly formulated RfC is against both User:Qwertyus and myself who support "emulation" as a fair summary of the article in its current state. @Redrose64 is an experienced editor who can explain your difficulties to you if you represent the matter as it is, and that your position is not unanimously supported in this poorly formed RfC. Please note that your co-editor RobertM is not joining you here to support you on this. FelixRosch (talk) 14:59, 8 October 2014 (UTC)
I did not call Qwertyus' edit, irrelevant to the article or to the topic, and certainly did not say that Qwertyus is irrelevant. I said it was irrelevant TO OUR DISPUTE about the term "human-like", which it obviously is because he neither removed nor added the term human like. QED. This will be the second time I have proved this, using plain English. I would prefer it if you would read my posts before responding. I'm finding it difficult to believe that you can't follow what I'm saying, and, if I assume good faith, I must also assume you are not reading them. ---- CharlesGillingham (talk) 21:30, 8 October 2014 (UTC)
And just to stay on point: the most reliable sources carefully and deliberately DO NOT define artificial intelligence as studying or emulating "human-like" intelligence, and this is an issue which many major AI researchers feel strongly about. Adding the term "human-like" to the lede is an insult to the hard work that these researchers have done to define their field. Misplaced Pages's editors do not have the right to define "artificial intelligence", so it does not matter what you think or what I think or what anyone thinks. What matters is the sources. ---- CharlesGillingham (talk) 21:37, 8 October 2014 (UTC)
Your personal attack against @Qwertyus was "Qwerty's actions are irrelevant", and your personal attack must stop. Are you now denying that this is a direct quote of your personal attack on another fellow editor? Also, to stay on point, your misrepresentation of your claim to "unanimous" support to admin must be withdrawn with full apology to the editor @Redrose for this misrepresentation. Your position is not unanimous, you are using an old outdated 2008 textbook for a high tech field, and your poorly formed RfC with your co-editor @RobertM promoting your preference for "Weak-AI" should be withdrawn. FelixRosch (talk) 16:32, 9 October 2014 (UTC)
May I cordially suggest to CharlesGillingham that you leave this rant, and any repetitions that follow, unanswered? The rest of us can all see for ourselves where it is coming from, there is no need to defend yourself against it. — Cheers, Steelpillow (Talk) 17:01, 9 October 2014 (UTC)

RFC on Phrase "Human-like" in First Paragraph

An editor has requested comments from other editors for this discussion.

Disclosure: The format and bias of this RfC is currently challenged and is currently being discussed. Any participation should be informed by pending changes or deletion of this RfC. FelixRosch (talk) 14:17, 3 October 2014 (UTC)

Should the phrase "human-like" be included in the first paragraph of the lede of this article as describing the purpose of the study of artificial intelligence? Robert McClenon (talk) 14:43, 2 October 2014 (UTC)

It is agreed that some artificial intelligence research, sometimes known as strong AI, does involve human-like intelligence, and some artificial intelligence research, sometimes known as weak AI, involves other types of intelligence, and these are mentioned in the body of the article. This survey has to do with what should be in the first paragraph. Robert McClenon (talk) 14:43, 2 October 2014 (UTC)

Survey on retention of "Human-like"

  • Oppose - The study of artificial intelligence has achieved considerable success with intelligent agents, but has not been successful with human-like intelligence. To limit the field to the pursuit of human-like intelligence would exclude its successes. Inclusion of the restrictive phrase would implicitly exclude much of the most successful research and would narrow the focus too much. Robert McClenon (talk) 14:46, 2 October 2014 (UTC)
  • Oppose - At least as it's currently being used. Only some fields of AI strive to be human-like. (Either through "strong" AI, or through emulating a specific human behavior.) The rest of it is only "human-like" in the sense that humans are intelligent creatures. The goal of many AI projects is to make some intelligent decision far better than any human possibly could, or sometimes simply to do things differently than humans would. To define AI as striving to be "human-like" is to encourage a 'Hollywood' understanding of the topic, and not a real understanding. (If "human-like" is mentioned farther down the paragraph with the qualifier that *some* forms of AI strive to be human-like, that's fine, but it should absolutely not be used to define the field as a whole.) APL (talk) 15:21, 2 October 2014 (UTC)
  • Comment The division of emphasis is pretty fundamental. I would prefer to see this division encapsulated in the lead, perhaps along the lines of, "...an academic field of study which generally studies the goal of creating intelligence, whether in emulating human-like intelligence or not." — Cheers, Steelpillow (Talk) 08:45, 3 October 2014 (UTC)
    This is not a bad idea. It has the advantage of being correct. ---- CharlesGillingham (talk) 18:15, 7 October 2014 (UTC)
    I don't know much about this subject area, but this compromise formulation is appealing to me. I can't comment on whether it has the advantage of being correct, but it does have the advantage of mentioning an aspect that might be especially interesting to novice readers. WhatamIdoing (talk) 04:43, 8 October 2014 (UTC)
  • Support. RFC question is inherently faulty: There cannot be a valid consensus concerning exclusion of a word from one arbitrarily numbered paragraph. One can easily add another paragraph to the article, or use the same word in another paragraph in a manner that circumvents said consensus, or use the same word in conjunction with negation. For instance, Robert McClenon seems not to endorse saying "AI is all about creating artificial human-like behavior." But doesn't that mean RM is in favor of saying "AI is not all about creating human-like behavior"? Both sentences have "human-like" in them. RFC question must instead introduce a specific literature and ask whether it is acceptable or not. Best regards, Codename Lisa (talk) 11:39, 3 October 2014 (UTC) Struck my comment because someone has refactored the question, effectively subverting my answer. This is not the question to which I said "Support". This RFC looks weaker and weaker every minute. Codename Lisa (talk) 17:03, 9 October 2014 (UTC)
His intent is clear from the mountain of discussion of the issue above. The question is should AI be defined as simulating human intelligence, or intelligence in general. ---- CharlesGillingham (talk) 13:54, 4 October 2014 (UTC)
Yes, that's where the danger lies: To form a precedent which is not the intention of a mountain of discussions that came beforehand. Oh, and let me be frank: Even if no one disregarded that, I wouldn't help form a consensus on what is inherently a loophole that will come to hunt me down ... in good faith! ("In good faith" is the part that hurts most.) Best regards, Codename Lisa (talk) 19:31, 4 October 2014 (UTC)
I don't understand this !vote. It appears to be a !vote against the RFC rather than against the exclusion of the term from the lead, in which case it belongs in the discussion section not in the survey section. Jojalozzo 22:27, 4 October 2014 (UTC)
Close, but no cigar. It is against the exclusion, but because of (not against) the RFC fault. Best regards, Codename Lisa (talk) 07:11, 5 October 2014 (UTC)
Is this vote just a personal opinion? Or do you have reliable sources? pgr94 (talk) 21:30, 8 October 2014 (UTC)
  • Oppose Please see detailed argument in the previous RfC. This is not how the most widely used AI textbooks define the field, and is not how many leading AI researches describe their work. ---- CharlesGillingham (talk) 13:52, 4 October 2014 (UTC)
  • Oppose - That is not the place for such an affirmation. For that we should have an article on Human-like Artificial intelligence. Incidentally, I also support the objections to the form of this RFC. JonRichfield (talk) 05:16, 5 October 2014 (UTC)
  • Oppose WP policy is clear (WP:V, WP:NOR and WP:NPOV) and this core policy just needs to be applied in this case. The literature does not say human-like. Those wishing to add "human-like" need to follow policy. My understanding is that personal opinions and walls of text are irrelevant. Please note that proponents of the change have yet to provide a single source. pgr94 (talk) 21:20, 8 October 2014 (UTC)

Threaded discussion of RFC format

(Deleting my own edit which was intentionally distorted by RfC editor User:RobertM by re-titling its section and submerging it into the section for his own personal gain of pressing his bias for the "Weak-AI" position in this poorly formulated RfC.) FelixRosch (talk) 17:22, 6 October 2014 (UTC)

Interestingly, User:FelixRosch didn't object to a previous very non-neutrally worded RFC, but now chooses to object to a neutrally worded RFC simply because the editor publishing the RFC has a stated opinion. Interesting. Robert McClenon (talk) 20:15, 2 October 2014 (UTC)
I do not see how separating the two sections as you did and then !voting in both is preferable to the "AfD" style where both "supports" and "opposes" run together. Does anyone object to me refactoring the poll accordingly? VQuakr (talk) 03:47, 3 October 2014 (UTC)
It is fine with me to refactor as long as it doesn't change the result. Robert McClenon (talk) 11:00, 3 October 2014 (UTC)
Done. — Cheers, Steelpillow (Talk) 09:00, 8 October 2014 (UTC)
The issue that I am trying to address has to do with the inclusion of the word "human-like" in the first paragraph in a limiting way, that is, defining the ultimate objective of artificial intelligence research as the implementation of human-like intelligence. The significance of the first paragraph is, of course, that it defines the scope of the rest of the article. I am willing to consider other ways to rework the first paragraph so that it recognizes both human-like and other forms of artificial intelligence. Robert McClenon (talk) 13:12, 3 October 2014 (UTC)

I was invited here randomly by a bot. (Though it also happens I have an academic AI background.) This RFC is flawed. Please read the RFC policy page before proceeding with this series of poorly framed requests. It makes no sense to me to have a section for including the term and separate section for excluding the term (should everyone enter an oppose and a support in each section?). The question should be something simple and straight forward like "Should "human-like" be included in the lead paragraph to define the topic." Then there should be a survey section where respondents can support or oppose the inclusion and a discussion section for stuff like this rant. Please read the policy page before digging this hole any deeper. Jojalozzo 22:35, 4 October 2014 (UTC)

Speaking as someone who has written a good deal of WP:RFC over the years, I'd like to drop by and clarify a few things:
  • Whatever you put in between the rfc template and the first timestamp is what people think the question is. Please don't put things like "the format of this RFC is being contested" as your "question". It looks like nonsense to people who are looking at Misplaced Pages:Requests for comment/Maths, science, and technology. No fancy formatting, please, just the actual question. What Robert has posted at the moment is fine.
  • The standard for a "neutral" question is "do your best". It is not "I get to reject any RFC (especially I'm 'losing') if I say the question is non-neutral". Frankly, the community expects RFC respondents to be capable of seeing through a non-neutral question and figuring out how to help improve the article.
  • There's nothing inherently wrong with separating support and oppose comments. It's not the most popular format for RFCs, but there is no rule against it. See Misplaced Pages:Requests for comment/Example formatting for other options, and pay careful attention to the words optional and voluntary (emphasis in the original) at the top of that page.
  • This isn't some sort of bureaucratic battle, where people can raise points of order and invoke rules to delay or interfere with the process. The point is to get useful information from a variety of people, to (ideally) make a decision, and to get back to normal editing. Or, to put it another way, an RFC is best approached as a minor variation on an everyday, normal talk-page discussion. The fancy coffee-roll-colored banner is just a sign that extra people are being encouraged to join the discussion. Everyday rules apply: Talk. Listen. Improve the article.
Good luck to you all, WhatamIdoing (talk) 04:39, 8 October 2014 (UTC)
Well spoken WhatamIdoing. — Cheers, Steelpillow (Talk) 09:00, 8 October 2014 (UTC)

This is the most confusingly formatted RFC I've ever seen that wasn't immediately thrown out as gibberish, however, it doesn't look like anybody is arguing that the topic should be described as "human-like" in the lead? I'd expect to see at least one. Have the concerned parties been notified that this is ongoing? APL (talk) 06:17, 8 October 2014 (UTC)

This continuing noise is in danger of drowning out the discussion proper, so I have refactored the format as suggested above and acceded to by the OP. — Cheers, Steelpillow (Talk) 09:00, 8 October 2014 (UTC)

Threaded discussion of RFC topic

I am getting more unhappy with that phrase "human-like". What does it signify? The lead says, "This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence," which to me presupposes human-like consciousness. OTOH here it is defined as: "The ability for machines to understand what they learn in one domain in such a way that they can apply that learning in any other domain." This makes no assumption of consciousness, it merely defines human-like behaviour. One of the citations in the article says, "Strong AI is defined ... by Russell & Norvig (2003, p. 947): "The assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the 'weak AI' hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the 'strong AI' hypothesis." Besides begging the question as to what "simulating thinking" might be, this appears to raise the question as to whether strong vs weak is really the same distinction as human-like vs nonhuman. Like everybody else, AI researchers between them have all kinds of ideas about the nature of consciousness. I'll bet that many think that "simulating thinking" is an oxymoron, while as many others see it as a crucial issue. In other words, there is a profound difference between the scientific study and creation of AI behaviour vs. the philosophical issue as to its inner experience - a distinction long acknowledged in the study of the human mind. Which of these aspects does the phrase "human-like" refer to? One's view of oneself in this matter will strongly inform one's view of AI in like manner. I would suggest that it can refer to either according to one's personal beliefs, and rational debate can only allow the various camps to beg to differ. The phrase is therefore best either avoided in the lead or at least set in an agreed context.
Sorry to have rambled on so. — Cheers, Steelpillow (Talk) 18:21, 6 October 2014 (UTC)

This is a good question, which hasn't been answered directly before. In my view, "human-like" can mean several different things:
  1. AI should use the same algorithms that people do. For example, means-ends analysis is an algorithm that was based on psychological experiments by Newell and Simon, where they studied how people solved puzzles. AI founder John McCarthy (computer scientist) argued that this was a very limiting approach.
  2. AI should study uniquely human behaviors; i.e. try to pass the Turing Test. See Turing Test#Weaknesses of the test for the arguments against this idea. Please read the section on AI research -- most AI researchers don't agree that the Turing Test is a good measure of AI's progress.
  3. AI should be based on neurology; i.e., we should simulate the brain. Several people in artificial general intelligence think this is the best way forward, but the vast majority of successful AI applications have absolutely no relationship to neurology.
  4. AI should focus on artificial general intelligence (by the way, this is what Ray Kurzweil and other popular sources call "strong AI"). It's not enough to write a program that solves only one particular problem intelligently; it has to be prepared to solve any problem, just as human brains are prepared to solve any problem. The vast majority of AI research is about solving particular problems. I think everyone would agree that general intelligence is a long term goal, but it is also true that many would not agree that "general intelligence" is necessarily "human-like".
  5. AI should attempt to give a machine subjective conscious experience (consciousness or sentience). (This is what John Searle and most academic sources call "strong AI"). Even if it were clear how this could be done, it is an open question as to whether consciousness is necessary or sufficient for intelligent problem-solving.
The question at issue is this: do any of these senses of "human like" represent the majority of mainstream AI research? Or do each of these represent the goals or methodology of a small minority of researchers or commentators? ---- CharlesGillingham (talk) 08:48, 7 October 2014 (UTC)
@Felix: What do you mean by "human-like"? Is it any of the senses above? Is there another way to construe it I have overlooked? I am still unclear as to what you mean by "human-like" and why you insist on including it in the lede. ---- CharlesGillingham (talk) 09:23, 7 October 2014 (UTC)
One other meaning occurs to me now that I have slept on it. The phrase "human-like" could be used as shorthand for "'human-like', whatever that means", i.e. it could be denoting a deliberately fuzzy notion that AI must clarify if it is to succeed. Mary Shelley galvanized Frankenstein's monster with electricity - animal magnetism - to achieve this end in what was essentially a philosophical essay on what it means to be human. Biologists soon learned that twitching the leg of a dead frog was not what they meant by life. People once wondered whether a sufficiently complex automaton could have "human-like" intelligence. Alan Turing suggested a test to apply, but nowadays we don't think that is quite what we mean. In the days of my youth, playing chess was held up as an example of more human-like thinking - until the trick was pulled and then everybody said, "oh no, now we know how it's done, that's not what I meant". Something like pulling inferences from fuzzy data took its place, only to be tossed in the "not what I meant" bucket by Google and its ilk. You get the idea. We won't know what "human-like" means until we have stopped saying "that's not what I meant" and started saying, "Yes, that's what I mean, you've done it." In this light we can understand that some AI researchers are desperate to make that clarification, while others believe it to be a secondary issue at best and prefer to focus on "intelligence" in its own right. — Cheers, Steelpillow (Talk) 09:28, 8 October 2014 (UTC)
Categories: