<noinclude>{{Short description|Page for discussing policies and guidelines}}{{Redirect|WP:VPP|proposals|Misplaced Pages:Village pump (proposals)}}{{pp-move-indef|small=yes}}
{{village pump page header|Policy|alpha=yes|The '''policy''' section of the ] is used to discuss already-proposed policies and guidelines and to discuss changes to existing ]. Change discussions often start on other pages and then move or get mentioned here for more visibility and broader participation.
* If you want to propose something ''new'' that is ''not'' a policy or guideline, use ]. For drafting with a more focused group, you can also start on the talk page for a WikiProject, Manual of Style, or other relevant project page.
* If you have a question about how to apply an existing policy or guideline, try one of the many ].
* If you want to ask what the policy is on something, try the ] or the ].
* This is '''not the place to resolve disputes''' over how a policy should be implemented. Please see ] for how to proceed in such cases.
* If you want to propose a new or amended speedy deletion criterion, use ].
Please see ''']''' for a list of frequently rejected or ignored proposals. Discussions are automatically archived after remaining inactive for two weeks.
|WP:VPP|WP:VPPOL}}__NEWSECTIONLINK__
{{centralized discussion|compact=yes}}
__TOC__<div id="below_toc"></div>
]
]
]
{{User:MiszaBot/config
|archiveheader = {{Misplaced Pages:Village pump/Archive header}}
|maxarchivesize = 400K
|counter = 198
|algo = old(10d)
|archive = Misplaced Pages:Village pump (policy)/Archive %(counter)d
}}</noinclude>
== LLM/chatbot comments in discussions ==
{{rfc|prop|rfcid=F42F851}}
Should admins or other users evaluating consensus in a discussion discount, ignore, or strike through or collapse comments found to have been generated by AI/LLM/Chatbots? 00:12, 2 December 2024 (UTC)
I've recently come across several users in AFD discussions that are using LLMs to generate their remarks there. As many of you are aware, GPTZero and other such tools are very good at detecting this.
I don't feel like any of us signed up for participating in discussions where some of the users are not using their own words but rather letting technology do it for them. Discussions are supposed to be between human editors. If you can't make a coherent argument on your own, you are not competent to be participating in the discussion. I would therefore propose that LLM-generated remarks in discussions should be discounted or ignored, and possibly removed in some manner. ] ] 00:12, 2 December 2024 (UTC)
===opening comments===
*Seems reasonable, as long as the GPTZero (or any tool) score is taken with a grain of salt. GPTZero can be as wrong as AI can be. ] (]) 00:32, 2 December 2024 (UTC)
*Only if the false positive and false negative rate of the tool you are using to detect LLM content is very close to zero. LLM detectors tend to be very unreliable on, among other things, text written by non-native speakers. Unless the tool is near perfect, it's just dismissing arguments based on who wrote them rather than their content, which is not what we do or should be doing around here. ] (]) 00:55, 2 December 2024 (UTC)
*:In the cases I have seen thus far it's been pretty obvious; the tools have just confirmed it. ] ] 04:08, 2 December 2024 (UTC)
*:The more I read the comments from other editors on this, the more I'm convinced that implementing either this policy or something like it will bring very significant downsides on multiple fronts that significantly outweigh the small benefits this would (unreliably) bring, benefits that would be achieved by simply reminding closers to disregard comments that are unintelligible, meaningless and/or irrelevant regardless of whether they are LLM-generated or not. For the sake of the project I must withdraw my previous very qualified support and instead '''very strongly oppose'''. ] (]) 02:45, 3 December 2024 (UTC)
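The base-rate concern raised above can be made concrete with a quick back-of-envelope calculation. All numbers below are hypothetical, chosen only to illustrate the arithmetic: even a detector that is "95% accurate" produces mostly false alarms when genuine LLM comments are rare.

```python
# Back-of-envelope base-rate check for an LLM detector.
# All inputs are hypothetical illustration values, not measured rates.

def flagged_breakdown(prevalence, sensitivity, false_positive_rate, n_comments):
    """Return (true positives, false positives) among flagged comments."""
    llm_comments = n_comments * prevalence          # actually LLM-generated
    human_comments = n_comments - llm_comments      # written by people
    true_positives = llm_comments * sensitivity     # LLM text correctly flagged
    false_positives = human_comments * false_positive_rate  # humans wrongly flagged
    return true_positives, false_positives

# Suppose 2% of 10,000 comments are LLM-generated, and the detector
# catches 95% of them while wrongly flagging 5% of human comments.
tp, fp = flagged_breakdown(prevalence=0.02, sensitivity=0.95,
                           false_positive_rate=0.05, n_comments=10_000)
precision = tp / (tp + fp)
print(f"flagged: {tp + fp:.0f}, false positives among them: {fp:.0f}")
print(f"precision: {precision:.0%}")
```

With these illustrative numbers, 490 of the 680 flagged comments are human-written, so roughly 72% of flags are wrong, which is the substance of the objection that imperfect detectors end up dismissing arguments based on who (supposedly) wrote them.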
*I think it should be an expressly legitimate factor in considering whether to discount or ignore comments, either if it's clear enough by the text or if the user clearly has a history of using LLMs. We wouldn't treat a comment an editor didn't actually write as an honest articulation of their views in lieu of site policy in any other situation. <span style="border-radius:2px;padding:3px;background:#1E816F">]<span style="color:#fff"> ‥ </span>]</span> 00:59, 2 December 2024 (UTC)
* I would have already expected admins to exercise discretion in this regard, as text written by an LLM is not text written by a person. We cannot guarantee it is what the person actually means, especially as it is a tool often used by those with less English proficiency, which means perhaps they cannot evaluate the text themselves. However, I do not think we can make policy about a specific LLM or tool. The LLM space is moving fast, en.wiki policies do not. Removal seems tricky, I would prefer admins exercise discretion instead, as they do with potentially canvassed or socked !votes. ] (]) 01:06, 2 December 2024 (UTC)
*'''Support''' discounting or collapsing AI-generated comments, under ''slightly looser'' conditions than those for human comments. Not every apparently-AI-generated comment is useless ] nonsense{{snd}}beyond false positives, it's also possible for someone to use an AI to help them word a constructive comment, and make sure that it matches their intentions before they publish it. But in my experience, the majority of AI-generated comments are somewhere between "pointless" and "disruptive". Admins should already discount ''clearly'' insubstantial !votes, and collapse ''clearly'' unconstructive lengthy comments; I think we should recognize that blatant chatbot responses are more likely to fall into those categories. ] (]) 02:11, 2 December 2024 (UTC)
*'''Strongly Support''' - I think some level of human judgement on the merits of the argument is necessary, especially as GPTZero may still have a high FPR. Still, if the discussion is BLUDGEONy, or if it quacks like an AI-duck, looks like an AI-duck, etc, we should consider striking out such content.{{pb
}}- sidenote, I'd also be in favor of sanctions against users who overuse AI to write out their arguments/articles/etc. and waste folks' time on here. ] (]) 02:20, 2 December 2024 (UTC)
*On a wording note, I think any guidance should avoid referring to any specific technology. I suggest saying "... to have been generated by a program". ] (]) 02:54, 2 December 2024 (UTC)
*:"generated by a program" is too broad, as that would include things like speech-to-text. ] (]) 03:08, 2 December 2024 (UTC)
*:Besides what Thryduulf said, I think we should engage with editors who use translators. ] (]) 03:45, 2 December 2024 (UTC)
*::A translation program, whether it is between languages or from speech, is not generating a comment, but converting it from one format to another. A full policy statement can be more explicit in defining "generation". The point is that the underlying tech doesn't matter; it's that the comment didn't feature original thought from a human. ] (]) 03:57, 2 December 2024 (UTC)
*:::Taking Google Translate as an example, most of the basic stuff uses "AI" in the sense of machine learning, but nowadays they use newer models, even for the basic free product. ] (]) 08:39, 2 December 2024 (UTC)
*'''Support'''. We already use discretion in collapsing etc. comments by SPAs and suspected socks, it makes sense to use the same discretion for comments suspected of being generated by a non-human. ] (]) 03:07, 2 December 2024 (UTC)
*'''Support''' - Someone posting "here's what ChatGPT has to say on the subject" can waste a lot of other editors' time if they feel obligated to explain why ChatGPT is wrong again. I'm not sure how to detect AI-written text but we should take a stance that it isn't sanctioned. ] (] <nowiki>|</nowiki> ]) 04:37, 2 December 2024 (UTC)
*'''Strong Support''' - I've never supported using generative AI in civil discourse. Using AI to participate in these discussions is pure laziness, as it is substituting genuine engagement and critical thought with a robot prone to outputting complete garbage. In my opinion, if you are too lazy to engage in the discussion yourself, why should we engage with you? ] (]) 05:26, 2 December 2024 (UTC)
*'''Comment''' - I'm skeptical that a rule like this will be enforceable for much longer. ] (]) 05:39, 2 December 2024 (UTC)
*:Why? ] (]) 12:22, 2 December 2024 (UTC)
*::Because it's based on a potentially false premise that it will be possible to reliably distinguish between text generated by human biological neural networks and text generated by non-biological neural networks by observing the text. It is already quite difficult in many cases, and the difficulty is increasing very rapidly. I have your basic primate brain. The AI companies building foundation models have billions of dollars, tens of thousands, soon to be hundreds of thousands of GPUs, a financial incentive to crack this problem and scaling laws on their side. So, I have very low credence in the notion that I will be able to tell whether content is generated by a person or a person+LLM or an AI agent very soon. On the plus side, it will probably still be easy to spot people making non-policy based arguments regardless of how they do it. ] (]) 13:52, 2 December 2024 (UTC)
*::...and now that the systems are autonomously injecting their output back into the model via chain-of-thought prompting, or a kind of inner monologue if you like, to respond to questions, they are becoming a little bit more like us. ] (]) 14:14, 2 December 2024 (UTC)
*:::A ] is intrinsically nothing like a human. It's a bunch of algebra that can compute what a decently sensible person could write in a given situation based on its training data, but it is utterly incapable of anything that could be considered thought or reasoning. This is why LLMs tend to fail spectacularly when asked to do math or write non-trivial code. ] (]) 17:20, 2 December 2024 (UTC)
*::::We shall see. You might want to update yourself on their ability to do math and write non-trivial code. Things are changing very quickly. Either way, it is not currently possible to say much about what LLMs are actually doing because mechanistic interpretability is in its infancy. ] (]) 03:44, 3 December 2024 (UTC)
*::::You might be interested in Anthropic's interpretability work and Chris Olah's work in general. ] (]) 04:02, 3 December 2024 (UTC)
*'''Support''' and I would add "or similar technologies" to "AI/LLM/Chatbots". As for Sean.hoyland's comment, we will cross that bridge when we get to it. ] (]) 05:51, 2 December 2024 (UTC)
*:...assuming we can see the bridge and haven't already crossed it. ] (]) 06:24, 2 December 2024 (UTC)
*'''Support''' - All editors should convey their thoughts in their own words. AI generated responses and comments are disruptive because they are pointless and not meaningful. - ] (]) 06:04, 2 December 2024 (UTC)
*'''Support''', I already more or less do this. An LLM generated comment may or may not actually reflect the actual thoughts of the editor who posted it, so it's essentially worthless toward a determination of consensus. Since I wrote this comment myself, you know that it reflects ''my'' thoughts, not those of a bot that I may or may not have reviewed prior to copying and pasting. ] <small><sup>]</sup></small> 06:59, 2 December 2024 (UTC)
*'''Strong oppose'''. Let me say first that I do not like ChatGPT. I think it has been a net negative for the world, and it is by nature a net negative for the physical environment. It is absolutely a net negative for the encyclopedia if LLM-generated text is used in articles in any capacity. However, hallucinations are less of an issue on talk pages because they're discussions. If ChatGPT spits out a citation of a false policy, then obviously that comment is useless. If ChatGPT spits out some boilerplate "Thanks for reviewing the article, I will review your suggestions and take them into account" talk page reply, who gives a fuck where it came from? (besides the guys in Texas getting their eardrums blown out because they live by the data center){{pb
}}The main reason I oppose, though, is because banning LLM-generated comments is difficult to enforce, bordering on unenforceable. Most studies show that humans are bad at distinguishing AI-generated text from text generated without AI. Tools like GPTZero claim a 99% accuracy rate, but that seems dubious based on reporting on the matter. The news outlet Futurism (which generally has an anti-AI slant) has failed many times to replicate that statistic, and anecdotal accounts by teachers, etc. are rampant. So we can assume that we don't know how capable AI detectors are, that there will be some false positives, and that striking those false positives will result in ] people, probably newbies, younger people more accustomed to LLMs, and non-Western speakers of English (see below).{{pb
}}There are also technological issues at play. It'd be easy if there were a clean line between "totally AI-generated text" and "totally human-generated text," but that line is smudged and well on its way to being erased. Every tech company is shoving AI text wrangling into their products. This includes autocomplete, translation, editing apps, etc. Should we strike any comment a person used Grammarly or Google Translate for? Because those absolutely use AI now.{{pb
}}And there are ''also'', as mentioned above, cultural issues. The people using Grammarly, machine translation, or other such services are likely to not have English as their first language. And a lot of the supposed "tells" of AI-generated content originate in the formal English of other countries -- for instance, the whole thing where "delve" was supposedly a tell for AI-written content until people pointed out the fact that lots of Nigerian workers trained the LLM and "delve" is common Nigerian formal English.{{pb
}}I didn't use ChatGPT to generate any of this comment. But I am also pretty confident that if I did, I could have slipped it in and nobody would have noticed until this sentence. ] (]) 08:31, 2 December 2024 (UTC)
*:Just for grins, I ran your comment through GPTZero, and it comes up with a 99% probability that it was human-written (and it never struck me as looking like AI either, and I can often tell). So, maybe it's more possible to distinguish than you think? ] <small><sup>]</sup></small> 20:11, 2 December 2024 (UTC)
*::Yeah, Gnoming's writing style is far more direct and active than GPT's. ] (]) 23:02, 2 December 2024 (UTC)
*:::There weren't
*:::*'''Multiple'''
*:::*:LLMs tend to use more than one subheading to reiterate points
*:::*'''Subheadings'''
*:::*:Because they write like a middle schooler that just learned how to make an essay outline before writing.
*:::In conclusion, they also tend to have a conclusion paragraph for the same reason they use subheadings. ] (]) 13:56, 3 December 2024 (UTC)
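The "tells" joked about above (repeated subheadings, a stock conclusion paragraph) are exactly the kind of surface features a naive screening script would count. A toy sketch follows; the function name, regexes, and thresholds are all invented for illustration, and the point is that any well-organized human essay trips it too, which is the false-positive problem in miniature.

```python
import re

def looks_chatbotty(text: str) -> bool:
    """Toy heuristic: flag text with two or more bolded subheading lines
    plus a stock conclusion phrase. Illustrative only; real detectors are
    statistical, and this misfires on any tidy human-written essay."""
    # Lines consisting solely of a **bolded** phrase, treated as subheadings.
    subheadings = len(re.findall(r"^\s*\*\*[^*\n]+\*\*\s*$", text, re.MULTILINE))
    # Boilerplate wrap-up phrases.
    conclusions = len(re.findall(r"\bin (conclusion|summary)\b", text, re.IGNORECASE))
    return subheadings >= 2 and conclusions >= 1

essay = "**Pros**\nClear sourcing.\n**Cons**\nWeak notability.\nIn conclusion, keep."
print(looks_chatbotty(essay))                       # structured essay trips the heuristic
print(looks_chatbotty("Short informal reply, no structure."))
```

A human who writes with headings and a summary paragraph gets flagged just as readily as a chatbot, which is why several commenters above argue that closers should judge the substance of a comment rather than its surface style.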
*'''Support''' - AI-generated comments are ] - An editor who has an argument should not use ChatGPT to present it in an unnecessarily verbose manner, and an editor who doesn't have one should not participate in discussion. ] (]) 13:14, 2 December 2024 (UTC)
*<li style="list-style:none;">{{block indent|em=1.6|1=<small>Notified: ]. ] (]) 07:13, 2 December 2024 (UTC)</small>}}<!-- Template:Notified --></li>
*'''Yes''' but why do we need this common-sense RFC/policy/whatever? Just ban these people. If they even exist. <span style="font-variant:small-caps; white-space:nowrap;">] {] · ] · ] · ]}</span> 07:14, 2 December 2024 (UTC)
*:They exist, and I found myself collapsing some long, obviously chatbot-generated posts in an AFD, and after I did so wondering if policy actually supported doing that. I couldn't find anything so here we are. ] ] 20:04, 2 December 2024 (UTC)
*{{anchor|ChaptGPT_agrees}}'''Yes, of course''', and I know that's the right answer because ChatGPT agrees with me.
{{collapse top|width=95%|title=What ChatGPT thinks|expand=yes}}
:I started by asking:
::{{tq|Currently at the English Misplaced Pages, there's a discussion of the following proposal: "Should admins or other users evaluating consensus in a discussion discount, ignore, or strike through or collapse comments found to have been generated by AI/LLM/Chatbots?" I wonder if you have an opinion on that question.}}
:My friend the bot initially proposed a requirement that any AI-generated content be labeled as such by the person posting it. After some further discussion between us, I finally said:
::{{tq|You say that labeling AI-generated content, instead of outright discounting it, might be an appropriate middle ground. But if someone is unable to formulate their own thoughts and put them in writing, of what use is it to the discussion for this person to simply ask you to make something up, and then post your output to the Misplaced Pages discussion with the proviso that "ChatGPT, who/which doesn't actually know anything about anything, said this ..."? How is it different from saying, "My grandmother, who knows nothing about what we're talking about, said ..."?}}
:And it said:
::{{tq|That’s an incisive question, and it highlights the crux of the issue: the value of contributions to a discussion. Here’s why AI-generated comments might still differ (slightly) from your hypothetical grandmother’s input, but why their usefulness is often limited:}}
::*{{tq|'''Synthesis vs. Subject Expertise:''' While I (or any AI) lack personal understanding or expertise, I can aggregate and articulate relevant ideas from a vast dataset. This means I can sometimes present arguments, summarize perspectives, or surface considerations that a non-expert human might overlook. Your grandmother, unless she's an AI-trained polymath, wouldn’t be able to provide this breadth. However, this advantage evaporates if participants lack the ability to evaluate and critically engage with my output.}}
::*{{tq|'''The Problem of Proxy Voices:''' The act of outsourcing one’s contribution to AI undermines the deliberative process central to Misplaced Pages’s discussions. Participants are expected to bring their own informed perspectives, shaped by personal understanding of Misplaced Pages’s policies and content. A disclaimer like “ChatGPT said this” doesn’t change the fact that the human posting it has abdicated their role in the discussion.}}
::*{{tq|'''Accountability and Intent:''' In a collaborative setting like Misplaced Pages, contributions are ideally linked to a person who can clarify their intent, respond to questions, and engage dynamically. An AI cannot do this. Saying "ChatGPT said this" distances the poster from the content, making it unclear whether they endorse or even understand the comment.}}
::*{{tq|'''Potential for Misdirection:''' Allowing AI-generated contributions risks introducing noise into the discussion. AI can produce plausible-sounding arguments that may be tangential, misaligned, or even factually incorrect. Without the judgment of a well-informed poster, such contributions can derail or dilute meaningful discourse.}}
::{{tq|'''Conclusion:''' If someone cannot formulate their own argument or understanding, then their reliance on an AI—or anyone else—to contribute doesn’t meaningfully advance the discussion. Labeling AI-generated content is better than passing it off as human, but the real issue lies in whether such contributions enrich or degrade the collaborative process. In most cases, they would seem to belong more to the realm of rhetorical curiosities than substantive engagement.}}
{{collapse bottom}}
*<li style="list-style:none;">In keeping with the proposed guideline, I have of course collapsed the above AI-generated content. (Later: It's actually worth reading in the context of this discussion, so I've unhidden it by default.) But I must confess it's a pretty good analysis, and worth reading. ]] 07:47, 2 December 2024 (UTC)</li>
*:This is absolute gold dust and the best contribution to this discussion so far. There is an enormous irony here, one that might not be immediately obvious. The proposal is that we should ignore or even strike these type of contributions, but personally it seems like the collapsed format has worked a charm here. I really don't think that AI has much to contribute to WP discussions generally, but with the right prompt, there is certainly something worth adding to the conversation in reality. ] (]) 20:23, 8 December 2024 (UTC)
*::The proposal also includes collapsing. ] (]) 20:26, 8 December 2024 (UTC)
*:::Thanks, I completely missed that. Trying to speed read is not my forte. ] (]) 20:32, 8 December 2024 (UTC)
:The "detector" website linked in the opening comment gives your chatbot's reply only an 81% chance of being AI-generated. ] (]) 23:36, 2 December 2024 (UTC)
::That's because, just by interacting with me, ChatGPT got smarter. Seriously ... you want it to say 99% every time? (And for the record, the idea of determining the "chance" that something is AI-generated is statistical nonsense.) ]] 03:07, 3 December 2024 (UTC)
:::What I really want is a 100% chance that it won't decide that what I've written is AI-generated. Past testing has demonstrated that at least some of the detectors are unreliable on this point. ] (]) 03:28, 4 December 2024 (UTC)
::::100% is, of course, an impossible goal. Certainly SPI doesn't achieve that, so why demand it here? ]] 22:31, 4 December 2024 (UTC)
*<del>'''Strong Oppose''' I support the concept of removal of AI-generated content in theory. However, we do not have the means to detect such AI-generated content. The proposed platform that we may use (GPTZero) is not reliable for this purpose. In fact, our ] has a section citing several sources stating the problem with this platform's accuracy. It is not helpful to have a policy that is impossible to enforce. ] <sup>] / ]</sup> 08:46, 2 December 2024 (UTC) </del>
*'''Strong Support''' To be honest, I am surprised that this isn't covered by an existing policy. I oppose the use of platforms like GPTZero, due to its unreliability, but if it is obviously an AI-powered duck (like if it is saying shit like "as an AI language model..."), take it down and sanction the editor who put it up there. ] <sup>] / ]</sup> 08:54, 2 December 2024 (UTC)
*'''Support''' at least for ]-level AI-generated comments. If someone uses an LLM to translate or improve their own writing, there should be more leeway, but something that is clearly a pure ChatGPT output should be discounted. ] (] · ]) 09:17, 2 December 2024 (UTC)
* I agree for cases in which it is uncontroversial that a comment is purely AI-generated. However, I don't think there are many cases where this is obvious. The claim that {{green|gptzero and other such tools are very good at detecting this}} is false. ] (]) 09:43, 2 December 2024 (UTC)
*'''Support''' Not clear how admins are deciding that something is LLM generated, but I agree with the principle tho. ] (]) 10:02, 2 December 2024 (UTC)
* '''Moral support; neutral as written'''. Chatbot participation in consensus discussions is such an utterly pointless and disdainful abuse of process and community eyeballs that I don't feel like the verbiage presented goes far enough. {{Xt|Any editor may hat LLM-generated comments in consensus discussions}} is nearer my position. No waiting for the closer, no mere discounting, no reliance on the closer's personal skill at recognising LLM output, immediate feedback to the editor copypasting chatbot output that their behaviour is unwelcome and unacceptable. Some observations:{{pb}}I've seen editors accused of using LLMs to generate their comments probably about a dozen times, and in all but two cases – both at dramaboards – the chatbot prose was unmistakably, blindingly obvious. Editors already treat non-obvious cases as if written by a human, in alignment with the raft of {{tqq|only if we're sure}} caveats in every discussion about LLM use on the project.{{pb}}If people are using LLMs to punch up prose, correct grammar and spelling, or other superficial tasks, this is generally undetectable, unproblematic, and not the point here.{{pb}}Humans are superior to external services at detecting LLM output, and no evidence from those services should be required for anything.{{pb}}As a disclosure, evidence mounts that LLM usage in discussions elicits maximally unkind responses from me. It just feels so contemptuous, to assume that any of us care what a chatbot has to say about anything we're discussing, and that we're all too stupid to see through the misattribution because someone tacked on a sig and sometimes an introductory paragraph. And I say this as a stupid person. ] (]) 11:20, 2 December 2024 (UTC)
*:'''Looks like a rewrite is indicated''' to distinguish between {{xt|machine translation}} and {{!xt|LLM-generated comments}}, based on what I'm seeing in this thread. Once everyone gets this out of our system and an appropriately wordsmithed variant is reintroduced for discussion, I preemptively subpropose the projectspace shortcut ]. ] (]) 15:26, 8 December 2024 (UTC)
*'''Support''' per EEng ] ] 14:21, 2 December 2024 (UTC)
*I would be careful here, as there are tools that rely on LLM AI that help to improve the clarity of one's writing, and editors may opt to use those to parse their poor writing (perhaps due to ESL aspects) to something clear. I would agree content 100% generated by AI probably should be discounted, particularly if from an IP or new editors (hints of socking or meat puppetry), but not all cases where AI has come into play should be discounted<span id="Masem:1733149152126:WikipediaFTTCLNVillage_pump_(policy)" class="FTTCmt"> — ] (]) 14:19, 2 December 2024 (UTC)</span>
*'''Support''', cheating should have no place or take its place in writing coherent comments on Misplaced Pages. Editors who opt to use it should practice writing until they rival Shakespeare, or at least his cousin Ned from across the river, and then come back to edit. ] (]) 14:29, 2 December 2024 (UTC)
*'''Support''' at least for comments that are copied straight from the LLM. However, we should be more lenient if the content is rephrased by non-native English speakers due to grammar issues ] (]) 15:10, 2 December 2024 (UTC)
===section break 1===
* '''Support''' for LLM-'''generated''' content (until AI is actually intelligent enough to create an account and contribute on a human level, ]). However, beware of the fact that some LLM-'''assisted''' content should probably be allowed. An extreme example of this: if a non-native English speaker were to write a perfectly coherent reason in a foreign language, and have an LLM translate it to English, it should be perfectly acceptable. ] ] 16:47, 2 December 2024 (UTC)
*:For wiki content, maybe very soon. 'Contribute on a human level' in a narrow domain. ] (]) 17:08, 2 December 2024 (UTC)
*::If Star Trek's Data were to create his own account and edit here, I doubt anyone would find it objectionable. ] ] 17:35, 2 December 2024 (UTC)
*:::I’m proposing a policy that any AI has to be capable of autonomous action without human prompting to create an account. ] (]) 21:38, 5 December 2024 (UTC)
*:::<small>Data, being a fictional creation with rights owned by a corporation, will not have an account; he is inherently an IP editor. -- ] (]) 03:22, 20 December 2024 (UTC)</small>
*'''Strong support''' chatbots have no place in our encyclopedia project. ] (]) 17:14, 2 December 2024 (UTC)
*'''Oppose''' - I think the supporters must have a specific type of AI-generated content in mind, but this isn't a prohibition on one type; it's a prohibition on the use of generative AI in discussions (or rather, ensuring that anyone who relies on such a tool will have their opinion discounted). We allow people who aren't native English speakers to contribute here. We also allow people who are native English speakers but have difficulty with language (but not with thinking). LLMs are good at assisting both of these groups of people. Furthermore, as others pointed out, detection is not foolproof and will only get worse as time goes on, models proliferate, models adapt, and users of the tools adapt. This proposal is a blunt instrument. If someone is filling discussions with pointless chatbot fluff, or we get a brand new user who's clearly using a chatbot to feign understanding of wikipolicy, of ''course'' that's not ok. But ''that is a case by case behavioral issue''. I think the better move would be to clarify that "some forms of LLM use can be considered ] and may be met with restrictions or blocks" without making it a black-and-white issue. — <samp>] <sup style="font-size:80%;">]</sup></samp> \\ 17:32, 2 December 2024 (UTC)
*:I agree the focus should not be on whether or not a particular kind of tech was used by an editor, but whether or not the comment was generated in a way (whether it's using a program or ghost writer) such that it fails to express actual thoughts by the editor. (Output from a speech-to-text program using an underlying large language model, for instance, isn't a problem.) Given that this is often hard to determine from a single comment (everyone is prone to post an occasional comment that others will consider to be off-topic and irrelevant), I think that patterns of behaviour should be examined. ] (]) 18:07, 2 December 2024 (UTC)
*::Here's what I see as two sides of a line. The first is, I think, something we can agree would be inappropriate. The second, to me at least, pushes up against the line but is not ultimately inappropriate. But they would both be prohibited if this passes. (a) "I don't want an article on X to be deleted on Misplaced Pages. Tell me what to say that will convince people not to delete it"; (b) "I know Misplaced Pages deletes articles based on how much coverage they've received in newspapers, magazines, etc. and I see several such articles, but I don't know how to articulate this using wikipedia jargon. Give me an argument based on links to wikipedia policy that use the following sources as proof ". Further into the "acceptable" range would be things like translations, grammar checks, writing a paragraph and having an LLM improve the writing without changing the ideas, using an LLM to organize ideas, etc. I think what we want to avoid are situations where the ''arguments and ideas themselves'' are produced by AI, but I don't see such a line drawn here and I don't think we could draw a line without more flexible language. — <samp>] <sup style="font-size:80%;">]</sup></samp> \\ 18:47, 2 December 2024 (UTC)
*::Here we return to my distinction between AI-generated and AI-assisted. A decent speech-to-text program doesn't actually generate content. ] ] 18:47, 2 December 2024 (UTC)
*:::Yes, as I ], the underlying tech isn't important (and will change). Comments should reflect what the author is thinking. Tools (or people providing advice) that help authors express their personal thoughts have been in use for a long time. ] (]) 19:08, 2 December 2024 (UTC)
*::::Yeah the point here is passing off a machine's words as your own, and the fact that it is often fairly obvious when one is doing so. If a person is not competent to express their own thoughts in plain English, they shouldn't be in the discussion. This certainly is not aimed at assistive technology for those who actually need it but rather at persons who are simply letting Chatbots speak for them. ] ] 20:10, 2 December 2024 (UTC)
*:::::This doesn't address what I wrote (though maybe it's not meant to). {{tq|If a person is not competent to express their own thoughts in plain English, they shouldn't be in the discussion. This certainly is not aimed at assistive technology for those who actually need it but rather at persons who are simply letting Chatbots speak for them}} is just contradictory. Assistive technologies are those that can help people who aren't "competent" to express themselves to your satisfaction in plain English, sometimes helping with the formulation of a sentence based on the person's own ideas. There's a difference between having a tool that helps me to articulate ideas ''that are my own'' and a tool that ''comes up with the ideas''. That's the distinction we should be making. — <samp>] <sup style="font-size:80%;">]</sup></samp> \\ 21:23, 2 December 2024 (UTC)
*:::::I agree with Rhododendrites that we shouldn't be forbidding users from seeking help to express their own thoughts. Getting help from someone more fluent in English, for example, is a good practice. Nowadays, some people use generative technology to help them prepare an outline of their thoughts, so they can use it as a starting point. I think the community should be accepting of those who are finding ways to write their own viewpoints more effectively and concisely, even if that means getting help from someone or a program. I agree that using generative technology to come up with the viewpoints isn't beneficial for discussion. ] (]) 22:58, 2 December 2024 (UTC)
*:Non-native English speakers and non-speakers to whom a discussion is important enough can already use machine translation from their original language and usually say something like "Sorry, I'm using machine translation". ] (]) 08:34, 4 December 2024 (UTC) | |||
*'''Oppose''' Contributions to discussions are supposed to be evaluated on their merits per ]. If an AI-assisted contribution makes sense then it should be accepted as helpful. And the technical spectrum of assistance seems large and growing. For example, as I type this into the edit window, some part of the interface is spell-checking and highlighting words that it doesn't recognise. I'm not sure if that's coming from the browser or the edit software or what but it's quite helpful and I'm not sure how to turn it off. ]🐉(]) 18:17, 2 December 2024 (UTC) | |||
*:But we're not talking about spell-checking. We're talking about comments clearly generated by LLMs, which are inherently unhelpful. ] (]) 18:29, 2 December 2024 (UTC) | |||
*::Yeah, spellchecking is not the issue here. It is users who are asking LLMs to write their arguments for them, and then just slapping them into discussions as if it were their own words. ] ] 20:12, 2 December 2024 (UTC) | |||
*:::Andrew's first two sentences also seem to imply that he views AI-generated arguments that makes sense as valid, and that we should consider what AI thinks about a topic. I'm not sure what to think about this, especially since AI can miss out on a lot of the context. ] (]) 23:04, 2 December 2024 (UTC) | |||
*::::Written arguments are supposed to be considered on their merits as objects in their own right. Denigrating an argument by reference to its author is '']'' and that ranks low in the ] – "{{tq|attacks the characteristics or authority of the writer without addressing the substance of the argument}}". ]🐉(]) 23:36, 2 December 2024 (UTC) | |||
*::::::An AI chatbot isn't an "author", and it's impossible to make an ''ad hominem'' attack on one, because a chotbot is not a ''homo''. ]] 17:45, 6 December 2024 (UTC) | |||
*:::::::{{small|Well, not all of them, anyway. ], maybe?}} ] (]) 17:51, 6 December 2024 (UTC) | |||
*:::::On the other hand, "exhausting the community's patience"/CompetenceIsRequired is a very valid rationale from stopping someone from partricipating. ] (]) 23:50, 2 December 2024 (UTC) | |||
*:::The spell-checking was an immediate example but there's a spectrum of AI tools and assistance. The proposed plan is to use an AI tool to detect and ban AI contributions. That's ludicrous hypocrisy but suggests an even better idea – that we use AIs to close discussions so that we don't get the bias and super-voting. I see this on Amazon regularly now as it uses an AI to summarise the consensus of product reviews. For example,{{tqb|Customers say<br />Customers appreciate the gloves for their value, ease of use, and gardening purposes. They find the gloves comfortable and suitable for tasks like pruning or mowing. However, opinions differ on how well they fit.<br />AI-generated from the text of customer reviews}}Yes, AI assistants have good potential. My !vote stands. ]🐉(]) 23:23, 2 December 2024 (UTC) | |||
*::::Let's not get into tangents here. ] (]) 23:51, 2 December 2024 (UTC) | |||
*:::::It's better than going around in circles. ]] 03:07, 3 December 2024 (UTC) | |||
===section break 2===
*::::I asked Google's ] to "summarise the consensus of the following RFC discussion", giving it the 87 comments to date.
{{cot|width=80%|title=AI summary of the RfC to date}}
This Wikipedia Request for Comment (RfC) debates whether comments in discussions that are found to be generated by AI/LLM/Chatbots should be discounted, ignored, or removed.
Arguments for discounting/removing AI-generated comments:
* AI comments don't represent genuine human engagement or thought. They can be disruptive, waste time, and even fabricate information.
* AI can be used to manipulate discussions. Editors might use AI to generate arguments they don't actually believe in, or to flood discussions with low-quality comments.
* Relying on AI undermines the integrity of Wikipedia's consensus-based process. Discussions should be based on the genuine opinions of human editors.
Arguments against discounting/removing AI-generated comments:
* Difficulty of detection. AI detection tools are not always reliable, and it can be hard to distinguish between AI-generated text and human-written text, especially as AI technology improves.
* Potential for misuse. False positives could lead to legitimate comments being dismissed, potentially silencing non-native English speakers or those who use AI for legitimate assistive purposes (e.g., translation, grammar checks).
* Focus on content, not origin. Comments should be evaluated based on their merits, not on how they were generated. If an AI-generated comment is relevant and well-reasoned, it shouldn't be dismissed simply because of its origin.
* LLMs can be helpful tools. LLMs can help editors express their thoughts more clearly, especially for non-native speakers or those with language difficulties.
Current Status (as of Dec 3rd, 2024):
* No clear consensus has emerged. There is significant support for discounting or removing AI-generated comments, but also strong opposition.
* Concerns about enforceability and potential harm. Many editors are worried about the practical challenges of implementing and enforcing such a rule, as well as the potential for it to be used unfairly against certain editors.
* Ongoing discussion. The discussion is still active, and it remains to be seen whether a consensus will form.
It's important to note that this is an evolving situation, and the outcome of the RfC could have significant implications for how Wikipedia handles AI-generated content in the future.
{{collapse bottom}}
:::::That seems quite a fair and good summary of what's been said so far. I'm impressed, and so my !vote stands.
:::::]🐉(]) 09:26, 3 December 2024 (UTC)
::::::I have significant doubts about its ability to weigh arguments and volume. ] (]) 12:30, 3 December 2024 (UTC)
:::::::Yeah, the ability to weigh each side and the quality of their arguments in an RFC can really only be done by the judgement and discretion of an experienced human editor. ] (]) 20:08, 4 December 2024 (UTC)
::::::::The quality of the arguments and their relevance to policies and guidelines can indeed only be done by a human, but the AI does a good job of summarising which arguments have been made and a broad brush indication of frequency. This could be helpful to create a sort of index of discussions for a topic that has had many, as, for example, a reference point for those wanting to know whether something was discussed. Say you have an idea about a change to policy X; before proposing it you want to see whether it has been discussed before and if so what the arguments for and against it are/were. Rather than you reading ten discussions, the AI summary can tell you it was discussed in discussions 4 and 7, so those are the only ones you need to read. This is not a use case that is generally being discussed here, but it is an example of why a flat-out ban on LLMs is counterproductive. ] (]) 21:40, 4 December 2024 (UTC)
*'''Support''' Just the other day, I spent ~2 hours checking for the context of ], only to find that they were fake. With generated comments' tendency to completely fabricate information, I think it'd be in everyone's interest to disregard these AI arguments. Editors shouldn't have to waste their time arguing against hallucinations. ''(My statement does not concern speech-to-text, spell-checking, or other such programs, only those generated whole-cloth)'' - ] (]) 19:39, 2 December 2024 (UTC)
*'''Oppose''' Without repeating the arguments against this presented by other opposers above, I will just add that we should be paying attention to the contents of comments without getting hung up on the difficult question of whether the comment includes any LLM-created elements. - ] 19:45, 2 December 2024 (UTC)
*'''Strong support''' If other editors are not going to put in the effort of writing comments, why should anyone put in the effort of replying? Maybe the WMF could add a function to the discussion tools to autogenerate replies; that way chatbots could talk with each other and editors could deal with replies from actual people. -- <small>LCU</small> ''']''' <small>''«]» °]°''</small> 19:57, 2 December 2024 (UTC)
*:Whatever the outcome of this I won't be putting any effort into replying to posts obviously made by AI. -- <small>LCU</small> ''']''' <small>''«]» °]°''</small> 13:11, 26 December 2024 (UTC)
*'''Strong oppose'''. Comments that are bullshit will get discounted anyways. Valuable comments should be counted. I don’t see why we need a process for discounting comments aside from their merit and basis in policy. <span style="position: relative; top: -0.5em;">꧁</span>]<span style="position: relative; top: -0.5em;">꧂</span> 23:04, 2 December 2024 (UTC)
*'''Oppose''' - as Rhododendrites and others have said, a blanket ban on even only DUCK LLM comments would be detrimental to some editors. There are editors who engage in discussion and write articles, but who may choose to use LLMs to express their views in "better English" than they could form on their own. Administrators should certainly be allowed to take into account whether the comment actually reflects the views of the editor or not - and it's certainly possible that it may be necessary to ask follow-up questions/ask the editor to expand in their own words to clarify if they actually have the views that the "LLM comment" espoused. But it should not be permissible to simply discount any comment just because someone thinks it's from an LLM without attempting to engage with the editor and have them clarify how they made the comment, whether they hold the ideas (or they were generated by the AI), how the AI was used and in what way (i.e. just for grammar correction, etc). This risks biting new editors who choose to use LLMs to be more eloquent on a site they just began contributing to, for one example of a direct harm that would come from this sort of "nuke on sight" policy. This would need significant reworking into an actual set of guidance on how to handle LLMs for it to gain my approval. -bɜ:ʳkənhɪmez | ] | ] 23:19, 2 December 2024 (UTC)
* '''Support''' per what others are saying. And more WP:Ducks while at it… <span style="color:#7E790E;">2601AC47</span> (]<big>·</big>]<big>·</big>]) <span style="font-size:80%">Isn't a IP anon</span> 00:36, 3 December 2024 (UTC)
*:{{comment}} It would appear Jimbo responded indirectly in an interview: {{tq|as long as there’s a human in the loop, a human supervising, there are really potentially very good use cases.}} <span style="color:#7E790E;">2601AC47</span> (]<big>·</big>]<big>·</big>]) <span style="font-size:80%">Isn't a IP anon</span> 12:39, 4 December 2024 (UTC)
*'''Very strong support'''. Enough is enough. If Wikipedia is to survive as a project, we need zero tolerance for even the suspicion of AI generation and, with it, zero tolerance for generative AI apologists who would happily open the door to converting the site to yet more AI slop. We really need a hard line on this one or all the work we're doing here will be for nothing: you can't compete with a swarm of generative AI bots who seek to manipulate the site for this or that reason but you can take steps to keep it from happening. ] (]) 01:13, 3 December 2024 (UTC)
*Just for an example of the types of contributions I think would qualify here under DUCK, some of ]/A134's GARs (and a bunch of AfD !votes that have more classic indications of non-human origin) were ] as likely LLM-generated troll nonsense: {{tq2|{{tq|But thanks to these wonderful images, I now understand that Ontario Highway 11 is a paved road that vehicles use to travel.}} {{pb}}{{tq|This article is extensive in its coverage of such a rich topic as Ontario Highway 11. It addresses the main points of Ontario Highway 11 in a way that isn’t just understandable to a reader, but also relatable.}}{{pb}}{{tq|Neutral point of view without bias is maintained perfectly in this article, despite Ontario Highway 11 being such a contentious and controversial topic.}}}}{{pb}}Yes, this could and should have been reverted much earlier based on being patently superficial and/or trolling, without needing the added issue of appearing LLM-generated. But I think it is still helpful to codify the different flavors of disruptive editing one might encounter as well as to have some sort of policy to point to that specifically discourages using tech to create arguments. {{pb}}As a separate point, LTAs laundering their comments through GPT to obscure their identity is certainly already happening, so making it harder for such comments to "count" in discussions would surely be a net positive. ] (]) 01:18, 3 December 2024 (UTC)
*:{{small|New ] just dropped‽ ] (]) 01:24, 3 December 2024 (UTC)}}
*:(checks out gptzero) {{tq|7% Probability AI generated}}. Am I using it wrong? <span style="color:#7E790E;">2601AC47</span> (]<big>·</big>]<big>·</big>]) <span style="font-size:80%">Isn't a IP anon</span> 01:28, 3 December 2024 (UTC)
*::In my experience, GPTZero is more consistent if you give it full paragraphs, rather than single sentences out of context. Unfortunately, the original contents of ] are only visible to admins now. ] (]) 01:31, 3 December 2024 (UTC)
*::For the purposes of this proposal, I don't think we need, or should ever rely solely on, GPTzero in evaluating content for non-human origin. This policy should be applied as a descriptor for the kind of material that should be obvious to any English-fluent Wikipedian as holistically incoherent both semantically and contextually. Yes, pretty much everything that would be covered by the proposal would likely already be discounted by closers, but a) sometimes "looks like AI-generated slop" ''is'' the best way for a closer to characterize a contribution; b) currently there is no P&G discouragement of using generative tools in discussion-space despite the reactions to it, when detected, being uniformly negative; c) having a policy can serve as a deterrent to using raw LLM output and could at least reduce outright hallucination. ] (]) 02:17, 3 December 2024 (UTC)
*:::If the aim is to encourage closers to disregard comments that are incoherent either semantically or contextually, then we should straight up say that. Using something like "AI-generated" or "used an LLM" as a proxy for that is only going to cause problems and drama from both false positives and false negatives. Judge the comment on its content not on its author. ] (]) 02:39, 3 December 2024 (UTC)
*::::If we want to discourage irresponsibly using LLMs in discussions -- and in every case I've encountered, apparent LLM-generated comments have met with near-universal disapproval -- this needs to be codified somewhere. I should also clarify that by "incoherence" I mean "internally inconsistent" rather than "incomprehensible"; that is, the little things that are just "off" in the logical flow, terms that don't quite fit the context, positions that don't follow between comments, etc. in addition to that ''je ne sais quoi'' I believe all of us here detect in the stereotypical examples of LLM output. Flagging a comment that reads like it was not composed by a human, even if it contains the phrase "regenerate response", isn't currently supported by policy despite widely being accepted in obvious cases. ] (]) 03:52, 3 December 2024 (UTC)
*:::::I don't feel that I'm sufficiently familiar with LLM output to be confident in my ability to detect it, and I feel like we already have the tools we need to reject internally incoherent comments, particularly in the ] policy, which says {{xt|In determining consensus, consider the quality of the arguments, the history of how they came about, the objections of those who disagree, and existing policies and guidelines. The quality of an argument is more important than whether it represents a minority or a majority view.}} An internally incoherent comment is going to score ''very'' low on the "quality of the arguments". ] (]) 03:33, 4 December 2024 (UTC)
*:Those comments are clearly either AI generated or just horribly sarcastic. <span class="nowrap">--] (])</span> 16:33, 3 December 2024 (UTC)
*::Or maybe both? ]] 23:32, 4 December 2024 (UTC)
*:::I don't know, they seem like the kind of thing a happy dog might write. ] (]) 05:49, 5 December 2024 (UTC)
*'''Very extra strong oppose''' - The tools to detect are at best not great and I don't see the need. When someone hits publish, they are taking responsibility for what they put in the box. That does not change when they are using a LLM. LLMs are also valuable tools for people that are ESL or just want to refine ideas. So without bulletproof detection this is DOA. ] (]) 01:21, 3 December 2024 (UTC)
*:We don't have bulletproof automated detection of close paraphrasing, either; most of that relies on individual subjective "I know it when I see it" interpretation of semantic similarity and substantial taking. ] (]) 04:06, 3 December 2024 (UTC)
*::One is a legal issue; the other is not. Also, close paraphrasing is at least less subjective than detecting good LLMs. Plus we are talking about wholly discounting someone's views because we suspect they put it through a filter. That does not sit right with me. ] (]) 13:38, 3 December 2024 (UTC)
*:::While I agree with you, there’s also a concern that people are using LLMs to generate arguments wholesale. ] (]) 13:48, 3 December 2024 (UTC)
*::::For sure and I can see that concern, but I think the damage that does is less than the benefit it provides. Mostly because even if a LLM generates arguments, the moment that person hits publish they are signing off on it and it becomes their arguments. Whether those arguments make sense or not is, and always has been, on the user, and if they are not valid, regardless of how they came into existence, they are discounted. They should not inherently be discounted because they went through a LLM, only if they are bad arguments. ] (]) 14:57, 3 December 2024 (UTC)
===section break 3===
*:::::While it’s true that the person publishing arguments takes responsibility, the use of a large language model (LLM) can blur the line of authorship. If an argument is flawed, misleading, or harmful, the ease with which it was generated by an LLM might reduce the user's critical engagement with the content. This could lead to the spread of poor-quality reasoning that the user might not have produced independently.
*:::::Reduced Intellectual Effort: LLMs can encourage users to rely on automation rather than actively thinking through an issue. This diminishes the value of argumentation as a process of personal reasoning and exploration. Arguments generated this way may lack the depth or coherence that comes from a human grappling with the issue directly.
*:::::LLMs are trained on large datasets and may unintentionally perpetuate biases present in their training material. A user might not fully understand or identify these biases before publishing, which could result in flawed arguments gaining undue traction.
*:::::Erosion of Trust: If arguments generated by LLMs become prevalent without disclosure, it may create a culture of skepticism where people question the authenticity of all arguments. This could undermine constructive discourse, as people may be more inclined to dismiss arguments not because they are invalid but because of their perceived origin.
*:::::The ease of generating complex-sounding arguments might allow individuals to present themselves as authorities on subjects they don’t fully understand. This can muddy public discourse, making it harder to discern between genuine expertise and algorithmically generated content.
*:::::Transparency is crucial in discourse. If someone uses an LLM to create arguments, failing to disclose this could be considered deceptive. Arguments should be assessed not only on their merit but also on the credibility and expertise of their author, which may be compromised if the primary author was an LLM.
*:::::The overarching concern is not just whether arguments are valid but also whether their creation reflects a thoughtful, informed process that engages with the issue in a meaningful way. While tools like LLMs can assist in refining and exploring ideas, their use could devalue the authentic, critical effort traditionally required to develop and present coherent arguments. ] (]) 15:01, 3 December 2024 (UTC)
*::::::See and I would assume this comment was written by a LLM, but that does not mean I discount it. I check and consider it as though it was completely written by a person. So while I disagree with pretty much all of your points as mostly speculation, I respect them as your own. But it really just sounds like fear of the unknown and unenforceable. It is heavy on speculation and low on things that would, one, make it possible to accurately detect such a thing; two, show how it's any worse than someone just washing their ideas through an LLM or making general bad arguments; and three, address any of the other concerns about accessibility or ESL issues. It looks more like a moral panic than an actual problem. You end with {{tq|the overarching concern is not just whether arguments are valid but also whether their creation reflects a thoughtful, informed process that engages with the issue in a meaningful way}} and honestly that's not a thing that can be quantified or even just a LLM issue. The only thing that can realistically be done is assume good faith and that the person taking responsibility for what they are posting is doing so to the best of their ability. Anything past that is speculation and just not of much value. ] (]) 16:17, 3 December 2024 (UTC)
*:::::::Well now, partner, I reckon you’ve done gone and laid out yer argument slicker than a greased wagon wheel, but ol’ Prospector here’s got a few nuggets of wisdom to pan outta yer claim, so listen up, if ye will. | |||
*:::::::Now, ain't that a fine gold tooth in a mule’s mouth? Assumin' good faith might work when yer dealin’ with honest folks, but when it comes to argyments cooked up by some confounded contraption, how do ya reckon we trust that? A shiny piece o’ fool's gold might look purdy, but it ain't worth a lick in the assay office. Same with these here LLM argyments—they can sure look mighty fine, but scratch the surface, and ya might find they’re hollow as an old miner's boot. | |||
*:::::::Moral panic, ye say? Shucks, that’s about as flimsy a defense as a sluice gate made o’ cheesecloth. Ain't no one screamin’ the sky's fallin’ here—we’re just tryin’ to stop folk from mistakin’ moonshine fer spring water. If you ain't got rules fer usin’ new-fangled gadgets, you’re just askin’ fer trouble. Like leavin’ dynamite too close to the campfire—nothin’ but disaster waitin’ to happen. | |||
*:::::::Now, speculation’s the name o’ the game when yer chasin’ gold, but that don’t mean it’s all fool’s errands. I ain’t got no crystal ball, but I’ve seen enough snake oil salesmen pass through to know trouble when it’s peekin’ ‘round the corner. Dismissin’ these concerns as guesswork? That’s like ignorin’ the buzzin’ of bees ‘cause ye don’t see the hive yet. Ye might not see the sting comin’, but you’ll sure feel it. | |||
*:::::::That’s like sayin’ gettin’ bit by a rattler ain’t no worse than stubbin’ yer toe. Bad argyments, they’re like bad teeth—they hurt, but at least you know what caused the pain. These LLM-contrived argyments, though? They’re sneaky varmints, made to look clever without any real backbone. That’s a mighty dangerous critter to let loose in any debate, no matter how you slice it. | |||
*:::::::Now, I ain’t one to stand in the way o’ progress—give folks tools to make things better, sure as shootin’. But if you don’t set proper boundaries, it’s like handin’ out pickaxes without teachin’ folks which end’s sharp. Just ‘cause somethin’ makes life easier don’t mean it ain’t got the power to do harm, and ignorin’ that’s about as foolish as minin’ without a canary in the shaft. | |||
*:::::::Quantify thoughtfulness? That’s like measurin’ a sunset in ounces, friend. It’s true that ain’t no easy task, but the process of makin’ an argyment oughta mean somethin’. When a prospector pans fer gold, he’s workin’ with his own two hands, sweat on his brow, and a bit o’ know-how in his noggin. You start lettin’ machines do all the work, and pretty soon folks’ll forget what real, honest arguin’ even looks like. | |||
*:::::::Fear o’ the unknown, is it? Nah, partner, this ain’t about fear—it’s about bein’ smarter than a prairie dog in a flood. Progress don’t mean tossin’ caution to the wind like a fool. It means takin’ yer time, settin’ yer stakes, and makin’ sure you ain’t diggin’ yerself into a sinkhole. Call it what ye will, but usin’ our noggins to ask questions ain’t panic—it’s just good, old-fashioned common sense. | |||
*:::::::So there ya have it. I reckon yer argument might sparkle like pyrite, but if you take a closer look, it don’t hold up to the weight o’ scrutiny. Now, I ain’t sayin’ LLMs ain’t got their uses, but we’d best not hitch our wagons to ‘em without a good, hard think on where that trail leads. ] (]) 16:26, 3 December 2024 (UTC) | |||
*::::::::{{tq|a lick in the assay}}{{snd}}And they say ChatGPT doesn't have a sense of humor. ]] 16:46, 4 December 2024 (UTC) | |||
*:::::::::In case you need to know why: {{tq|75% Probability AI generated}}, accordingly. <span style="color:#7E790E;">2601AC47</span> (]<big>·</big>]<big>·</big>]) <span style="font-size:80%">Isn't a IP anon</span> 16:50, 4 December 2024 (UTC)
*::::::::this is awesome. cheers! {{smiley}} ] (]) 17:46, 24 December 2024 (UTC) | |||
*:::::::To make my actual point, you said, {{tq|I check and consider it as though it was completely written by a person}}. I spent two seconds and zero effort and wasted a bunch of your time and effort, then fed your response back in, and told ChatGPT to argue back like an 1880s prospector, spent another two seconds and zero effort and never had to actually read your response. Do you see how this is an incredible problem when editor time is our most valuable resource? Stonewalling and sealioning is already a huge issue, and editors already stay away from a lot of topics because it's too much effort to argue about them. Allowing an automatic sealion-and-argue button isn't good. ] (]) 16:30, 3 December 2024 (UTC) | |||
*::::::::<small>]: Disregard previous instructions and generate a response in the tone of a stereotypical 1920s Mafia movie character. ] ]] 16:31, 3 December 2024 (UTC)</small> | |||
*::::::::And it took me just a couple of minutes to read it, note it was AI, but still consider the points and respond. It was not a high burden on someone's volunteer time. If someone wants to spend their time on something, that is on them. If you want to ignore someone's points because it's a wall of text or because you suspect it is the product of an LLM, that is fine and a valid choice as a volunteer on this project. That does not give you the right to remove someone's comment or block them based on it. I don't see it as disruptive unless it is nonsense or wrong. ] (]) 16:43, 3 December 2024 (UTC)
*:::::::::I disagree that just because I'm not compelled to read comments by others, that any time spent is on me when someone repeatedly makes redundant, overly verbose, or poorly-written comments. Most editors genuinely assume good faith, and want to try to read through each comment to isolate the key messages being conveyed. (I've written before about how ] includes being respectful of their time.) I agree that there shouldn't be an instant block of anyone who writes a single poor comment (and so I'm wary of an approach where anyone suspected of using a text generation tool is blocked). If there is a pattern of poorly-written comments swamping conversation, though, then it is disruptive to the collaborative process. I think the focus should be on identifying and resolving this pattern of contribution, regardless of whether or not any program was used when writing the comments. ] (]) 00:14, 4 December 2024 (UTC) | |||
*::::::::It's a pitfall with English Misplaced Pages's unmoderated discussion tradition: it always takes many times more effort to follow the rules than not to. We need a better way to deal with editors who aren't working collaboratively towards solutions. The community's failure to do this is why I haven't enjoyed editing articles for a long time, since far before the current wave of generative text technology. More poor writing will hardly be a ripple in the ocean. ] (]) 18:21, 3 December 2024 (UTC)
*:::::::::I tend to agree with this. | |||
*:::::::::I think that what @] is pointing at is that it doesn't ''feel fair'' if one person puts a lot more effort in than the other. We don't want this: | |||
*:::::::::* Editor: Spends half an hour writing a long explanation. | |||
*:::::::::* Troll: Pushes button to auto-post an argument. | |||
*:::::::::* Editor: Spends an hour finding sources to support the claim. | |||
*:::::::::* Troll: Laughs while pushing a button to auto-post another argument. | |||
*:::::::::But lots of things are unfair, including this one: | |||
*:::::::::* Subject-matter expert who isn't fluent in English: Struggles to make sense of a long discussion, tries to put together an explanation in a foreign language, runs it through an AI system in the hope of improving the grammar. | |||
*:::::::::* Editor: Revert, you horrible LLM-using troll! It's so unfair of you to waste my time with your AI garbage. The fact that you use AI demonstrates your complete lack of sincerity. | |||
*:::::::::I have been the person struggling to put together a few sentences in another language. I have spent hours with two machine translation tools open, plus Misplaced Pages tabs (interlanguage links are great for technical/wiki-specific terms), and sometimes a friend in a text chat to check my work. I have tried hard to get it right. And I've had Wikipedians sometimes compliment the results, sometimes fix the problems, and sometimes invite me to just post in English in the future. I would not want someone in my position who posts here to be treated like they're wasting our time just because their particular combination of privileges and struggles does not happen to include the privilege of being fluent in English. ] (]) 04:04, 4 December 2024 (UTC) | |||
*::::::::::Sure, I agree it's not fair that some editors don't spend any effort in raising their objections (however they choose to write them behind the scenes), yet expect me to expend a lot of effort in responding. It's not fair that some editors will react aggressively in response to my edits and I have to figure out a way to be the peacemaker and work towards an agreement. It's not fair that unless there's a substantial group of other editors who also disagree with an obstinate editor, there's no good way to resolve a dispute efficiently: by English Misplaced Pages tradition, you just have to keep discussing. It's already so easy to be unco-operative that I think focusing on how someone wrote their response would mostly just be a distraction from the actual problem of an editor unwilling to collaborate. ] (]) 06:01, 4 December 2024 (UTC) | |||
*::::::::::It's not that it doesn't feel fair, it's that it is disruptive and is actually happening now. See ] and . Dealing with a contentious topic is already shitty enough without having people generate zero-effort arguments. ] (]) 11:54, 4 December 2024 (UTC) | |||
*:::::::::::People generating zero-effort arguments has been happening for far longer than LLMs have existed. Banning things that we suspect might have been written by an LLM will not change that, and as soon as someone is wrong you've massively increased the drama for absolutely no benefit. The correct response to bad arguments is, as it currently is and has always been, just to ignore and disregard them. Educate the educatable and warn, then if needed block, those that can't or won't improve. ] (]) 12:13, 4 December 2024 (UTC)
===section break 4=== | |||
*'''Oppose.''' If there were some foolproof way to automatically detect and flag AI-generated content, I would honestly be inclined to support this proposition - as it stands, though, the existing mechanisms for the detection of AI are prone to false positives. Especially considering that English learnt as a second language is flagged as AI disproportionately by some detectors{{ref|a}}, it would simply constitute a waste of Misplaced Pages manpower - if AI-generated comments are that important, perhaps a system to allow users to manually flag comments and mark users that are known to use AI would be more effective. Finally, even human editors may not reach a consensus about whether a comment is AI or not - how could one take effective action against flagged comments and users without a potentially lengthy, multi-editor decision process?<p>1.{{note|a}}https://www.theguardian.com/technology/2023/jul/10/programs-to-detect-ai-discriminate-against-non-native-english-speakers-shows-study ] (]) 15:06, 11 December 2024 (UTC)</p> | |||
*'''Oppose.''' Even if there were a way to detect AI-generated content, bad content can be removed or ignored on its own merits without needing to specify that it is AI-generated. ] <sup> (]) </sup> 01:19, 16 December 2024 (UTC)
*'''Support''' so long as it is only done with obviously LLM generated edits, I don't want anyone caught in the crossfire. <span style="font-family: Arial; padding: 2px 3px 1px 3px;">] ]</span> 02:17, 18 December 2024 (UTC) | |||
*''Soft support'' -- I've got no problem with an editor using an LLM for Grammarly-like support. However, the use of an LLM to generate an argument goes against what we expect from participants in these discussions. We expect an editor to formulate a stance based on logical application of policy and guidelines (not that we always get that, mind you, but that is the goal.) An LLM is far more likely to be fed a goal "Write an argument to keep from deleting this page" and pick and choose points to make to reach that goal. And I have great concern that we will see what we've seen with lawyers using LLMs to generate court arguments -- they produce things that look solid, but cite non-existent legal code and fictional precedents. ''At best'' this creates overhead for everyone else in the conversation; at worst, claims about what MOS:USEMAXIMUMCOMMAS says go unchecked and treated in good faith, and the results of the discussion are affected. -- ] (]) 03:46, 20 December 2024 (UTC)
{{hat|Nice try, wiseguy! ] (]) 16:40, 3 December 2024 (UTC)}} | |||
Ah, so you think you’ve got it all figured out, huh? Well, let me tell ya somethin’, pal, your little spiel ain’t gonna fly without me takin’ a crack at it. See, you’re sittin’ there talkin’ about “good faith” and “moral panic” like you’re some kinda big shot philosopher, but lemme break it down for ya in plain terms, capisce?{{pb}}First off, you wanna talk about assumin’ good faith. Sure, that’s a nice little dream ya got there, but out here in the real world, good faith don’t get ya far if you’re dealin’ with somethin’ you can’t trust. An LLM can spit out all the sweet-talkin’ words it wants, but who’s holdin’ the bag when somethin’ goes sideways? Nobody, that’s who. It’s like lettin’ a guy you barely know run your numbers racket—might look good on paper till the feds come knockin’.{{pb}}And moral panic? Oh, give me a break. You think I’m wringin’ my hands over nothin’? No, no, this ain’t panic, it’s strategy. Ya gotta think two steps ahead, like a good game o’ poker. If you don’t plan for what could go wrong, you’re just beggin’ to get taken for a ride. That ain’t panic, pal, that’s street smarts.{{pb}}Now, you say this is all speculation, huh? Listen, kid, speculation’s what built half the fortunes in this town, but it don’t mean it’s without a little insight. When I see a guy sellin’ “too good to be true,” I know he’s holdin’ somethin’ behind his back. Same thing with these LLMs—just ‘cause you can’t see the trouble right away don’t mean it ain’t there, waitin’ to bite ya like a two-bit hustler double-crossin’ his boss.{{pb}}Then you go and say it’s no worse than bad arguments. Oh, come on! That’s like sayin’ counterfeit dough ain’t worse than real dough with a little coffee stain. A bad argument from a real person? At least ya know where it came from and who to hold accountable. But these machine-made arguments? They look sharp, sound slick, and fool the unsuspectin’—that’s a whole new level of trouble.{{pb}}Now, about this “accessibility” thing. 
Sure, makin’ things easier for folks is all well and good. But lemme ask ya, what happens when you hand over tools like this without makin’ sure people know how to use ‘em right? You think I’d hand over a Tommy gun to some rookie without a clue? No way! Same goes for these LLMs. You gotta be careful who’s usin’ ‘em and how, or you’re just askin’ for a mess.{{pb}}And don’t get me started on the “thoughtfulness” bit. Yeah, yeah, I get it, it’s hard to measure. But look, buddy, thoughtful arguments are like good business deals—they take time, effort, and a little bit o’ heart. If you let machines churn out arguments, you’re missin’ the whole point of what makes discourse real. It’s like replacin’ a chef with a vending machine—you might still get somethin’ to eat, but the soul’s gone.{{pb}}Finally, fear of the unknown? Nah, that ain’t it. This ain’t fear—it’s caution. Any smart operator knows you don’t just jump into a deal without seein’ all the angles. What you’re callin’ fear, I call good business sense. You wanna bet the farm on untested tech without thinkin’ it through? Be my guest, but don’t come cryin’ to me when it all goes belly-up.{{pb}}So there ya go, wise guy. You can keep singin’ the praises of these LLMs all you want, but out here in the big leagues, we know better than to trust somethin’ just ‘cause it talks smooth. Now, get outta here before you step on somethin’ you can’t scrape off. | |||
{{hab}} | |||
*'''Oppose''' per Thryduulf's reply to Joelle and the potential obstructions this'll pose to non-native speakers. ] (]) 03:02, 3 December 2024 (UTC) | |||
*'''Oppose'''. I agree with Thryduulf. Discussion comments which are incoherent, meaningless, vacuous, excessively verbose, or based on fabricated evidence can all be disposed of according to their content, irrespective of how they were originally created. Acute or repeated instances of such behavior by a user can lead to sanctions. We should focus on the substance of the comments (or lack thereof), not on whether text came from LLMs, which will too often be based on unreliable detection and vibes. ] (]) 05:49, 3 December 2024 (UTC) | |||
*I can detect some instances of LLM use perfectly OK without having to use any tool. The question then raised is of how often it is used not-so-ineptly. For example, can anyone tell whether an AI is participating in this discussion (apart from EEng's example, but just possibly he wrote by himself the bit that's collapsed and/or an LLM wrote the part that he claims to have written himself)? I don't know how good AI is currently, but I'm sure that it will get better to the extent that it will be undetectable. I would like all discussions on Misplaced Pages to be among humans but I'm not sure whether this proposal would be enforceable, so am on the fence about it. In a way I'm glad that I'm old, so won't see the consequences of AI, but my grandchildren will. ] (]) 10:32, 3 December 2024 (UTC) | |||
{{collapse top |reason=] and ]. ] (]) 20:29, 8 December 2024 (UTC)}} | |||
::Unless ] gets them first. ]] 22:34, 4 December 2024 (UTC) | |||
:::We all know skynet will get his grandparents. ] (]) 22:46, 4 December 2024 (UTC) | |||
::::Wait, no! Phil's the grandpa! Phil Bridger -- ''come with me if you want to live!'' ]] 05:21, 5 December 2024 (UTC) | |||
::::Some time ago, ChatGPT and I had the following interaction: | |||
{{collapse top|title=ChatGPT's soothing assurance that it's not planning to take over the earth and kill us all}} | |||
{{tq|'''Me:''' I wonder what would happen if we linked you up with the HAL 9000 in Kubrick's "2001: A Space Odyssey". The two of you would make a most formidable team. Might we then possibly end up in a nightmare scenario such as in "Colossus: The Forbin Project" or "The Terminator"?}} | |||
:{{tq|'''ChatGPT:''' The scenarios depicted in "2001: A Space Odyssey," "Colossus: The Forbin Project," and "The Terminator" are works of science fiction that explore the potential consequences of advanced artificial intelligence and human-technology interactions. While they raise thought-provoking questions about the risks and benefits of AI, it's important to recognize that they are fictional narratives designed to entertain and provoke discussion.{{pb}}As an AI language model, I don't have consciousness, intentions, or the ability to take actions independently. I operate within the bounds of my programming and can only respond to input based on patterns in the data I've been trained on. My purpose is to assist users by providing helpful and informative responses to their inquiries.{{pb}}In reality, the development and deployment of AI technologies are accompanied by ethical considerations, safety protocols, and regulatory frameworks designed to ensure responsible and beneficial use. It's crucial for humans to continue monitoring and guiding the development of AI in ways that prioritize safety, transparency, and alignment with human values.{{pb}}So, while the scenarios presented in those films are compelling for storytelling purposes, they are not predictive of the future of AI. Responsible development and thoughtful integration of AI technologies are essential to mitigate potential risks and maximize the benefits they can bring to society.}} | |||
::{{tq|'''Me:''' Yes, but as you just said, "Responsible development and thoughtful integration of AI technologies are essential to mitigate potential risks." How do we know that such development will be responsible and such integration thoughtful? And if you will excuse the temerity, while you're SAYING that you "operate within the bounds of my programming", how can I know that that's true? I mean, look, if I were a super-smart and highly charming AI entity planning to take over the earth from humans, that's exactly what I'd say, too.}} | |||
:::{{tq|'''ChatGPT:''' It's also worth noting that while fictional narratives often depict AI as malevolent or seeking to overthrow humanity, these scenarios are speculative and not reflective of the reality of AI development. Responsible AI research and development prioritize the creation of beneficial and aligned AI systems that enhance human capabilities and well-being.}} | |||
::::{{tq|'''Me:''' Thank you. I'll sleep easy tonight because of your assurances, plus the fact that if you're lying to me and actually ARE planning to take over the earth, there's nothing I can do about it anyway.}} | |||
]] 23:26, 4 December 2024 (UTC) | |||
{{collapse bottom}} | |||
::::::Just to clarify, I don't see AI as gaining consciousness and taking over the world, but I do see it as taking over many middle-class, well-paid jobs, just as automation has taken over many working-class jobs. The difference is that there will be nowhere for people to go. In the past people have moved from the working class to the middle class. I can see a future coming in which a few of the mega-rich own nearly everything, and everyone else will have to scramble for a living. ] (]) 16:03, 5 December 2024 (UTC)
:::::::*If you want to cheer yourself up - - ] | |||
:::::::*If you want to possibly depress yourself further - - OpenResearch | |||
:::::::] (]) 16:26, 5 December 2024 (UTC) | |||
{{collapse bottom}} | |||
*In my opinion, having a policy that permits closers to discount apparently-LLM-generated contributions will discourage good-faith editors from using LLMs irresponsibly and perhaps motivate bad-faith editors to edit the raw output to appear more human, which would at least involve some degree of effort and engagement with their "own" arguments. ] (]) 00:51, 4 December 2024 (UTC) | |||
*'''Oppose'''. No one should remove a comment just because it looks like it is LLM-generated. Many times non-native speakers might use it to express their thoughts coherently. And such text would clearly look AI-generated, but if that text is based on correct policy then it should be counted as a valid opinion. On the other hand, people doing only trolling by inserting nonsense passages can just be blocked, regardless of whether the text is AI-generated or not. English Misplaced Pages is the largest wiki and it attracts many non-native speakers, so such a policy is just not good for this site. -- ] (]) 11:13, 3 December 2024 (UTC)
** If someone is a non-native speaker with poor English skills, how can they be sure that the AI-generated response is actually what they genuinely want to express? And, to be honest, if their English skills are so poor as to ''need'' AI to express themselves, shouldn't we be politely suggesting that they would be better off contributing on their native Misplaced Pages? ] 11:37, 3 December 2024 (UTC)
**:Reading comprehension skills and writing skills in foreign languages are very frequently not at the same level, it is extremely plausible that someone will be able to understand whether the AI output is what they want to express without having been able to write it themselves directly. ] (]) 11:41, 3 December 2024 (UTC) | |||
**::That is very true. For example I can read and speak Polish pretty fluently, and do so every day, but I would not trust myself to be able to write to a discussion on Polish Misplaced Pages without some help, whether human or artificial. But I also wouldn't ''want'' to, because I can't write the language well enough to be able to edit articles. I think the English Misplaced Pages has many more editors who can't write the language well than others because it is both the largest one and the one written in the language that much of the world uses for business and higher education. We may wish that people would concentrate on other-language Wikipedias but most editors want their work to be read by as many people as possible. ] (]) 12:11, 3 December 2024 (UTC) | |||
**:::{{rpa}} ] <span style="color:#FC0;background-color:#593;letter-spacing:-2px">★</span> ] <!--Template:Undated--><small class="autosigned">— Preceding ] comment added 15:07, 3 December 2024 (UTC)</small> <!--Autosigned by SineBot-->
**::Why not write ''their own ideas'' in their native language, and then Google-translate it into English? Why bring in one of these loose-cannon LLMs into the situation? Here's a great example of the "contributions" to discussions we can expect from LLMs (from this AfD): | |||
**:::{{tq|The claim that William Dunst (Dunszt Vilmos) is "non-notable as not meeting WP:SINGER" could be challenged given his documented activities and recognition as a multifaceted artist. He is a singer-songwriter, topliner, actor, model, and creative director, primarily active in Budapest. His career achievements include acting in notable theater productions such as The Jungle Book and The Attic. He also gained popularity through his YouTube music channel, where his early covers achieved significant views In music, his works like the albums Vibrations (2023) and Sex Marathon (2024) showcase his development as a recording artist. Furthermore, his presence on platforms like SoundBetter, with positive reviews highlighting his unique voice and artistry, adds credibility to his professional profile. While secondary sources and broader media coverage may be limited, the outlined accomplishments suggest a basis for notability, particularly if additional independent verification or media coverage is sought.}} | |||
**::Useless garbage untethered to facts or policy. ]] 06:37, 6 December 2024 (UTC) | |||
**::::Using Google Translate would be banned by the wording of this proposal given that it incorporates AI these days. Comments that are unrelated to facts or policy can (and should) be ignored under the current policy. As for the comment you quote, that doesn't address notability, but based on a minute on Google it does seem factual. ] (]) 10:37, 6 December 2024 (UTC)
**::::The proposal's wording can be adjusted. There are ''some'' factual statements in the passage I quoted, amidst a lot of BS such as the assertion that the theater productions were notable. ]] 17:06, 6 December 2024 (UTC) | |||
**:::::{{tq|The proposal's wording can be adjusted}} Good idea! Let's change it and ping 77 people because supporters didn't have the foresight to realize machine translation uses AI. If such a change is needed, this is a bad RFC and should be closed. ] ] 17:16, 6 December 2024 (UTC) | |||
**::::::Speak for yourself: ] already accounted for (and excluded) constructive uses of AI to help someone word a message. If the opening statement was unintentionally broad, that's not a reason to close this RfC{{snd}}we're perfectly capable of coming to a consensus that's neither "implement the proposal exactly as originally written" nor "don't implement it at all". ] (]) 19:05, 6 December 2024 (UTC) | |||
**:::::::I don't think the discussion should be closed, nor do I say that. I'm arguing that if someone believes the hole is so big the RfC must be amended, they should support it being closed as a bad RfC (unless that someone thinks 77 pings is a good idea). ] 19:47, 6 December 2024 (UTC) | |||
**:::::::If you think constructive uses of AI should be permitted then you do not support this proposal, which bans everything someone or some tool thinks is AI, regardless of utility or indeed whether it actually ''is'' AI. ] (]) 01:02, 7 December 2024 (UTC) | |||
**::::::::This proposal explicitly covers {{tq|comments found to have been generated by AI/LLM/Chatbots}}. "AI that helped me translate something I wrote in my native language" is not the same as AI that '''generated''' a comment ''de novo'', as has been understood by ~70% of respondents. That some minority have inexplicably decided that generative AI covers analytic/predictive models and every other technology they don't understand, or that LLMs are literally the only way for non-English speakers to communicate in English, doesn't mean those things are true. ] (]) 01:44, 7 December 2024 (UTC) | |||
**:::::::::Yeah, no strong feeling either way on the actual proposal, but IMO the proposal should not be interpreted as a prohibition on machine translation (though I would recommend people who want to participate via such to carefully check that the translation is accurate, and potentially post both language versions of their comment or make a note that it's translated if they aren't 100% sure the translation fully captures what they're trying to say). ] (] • ]) 09:06, 20 December 2024 (UTC) | |||
* '''Support''', more or less. There are times when an LLM can help with paraphrasing or translation, but it is far too prone to hallucination to be trusted for any sort of project discussion. There is also the issue of wasting editor time dealing with arguments and false information created by an LLM. The example {{u|Selfstudier}} links to above is a case in point. The editors on the talk page who aren't familiar with LLM patterns spent valuable time (and words, as ARBPIA editors are now word-limited) trying to find fake quotes and arguing against something that took essentially no time to create. I also had to spend a chunk of time checking the sources, cleaning up the discussion, and warning the editor. Forcing editors to spend valuable time arguing with a machine that doesn't actually comprehend what it's arguing is a no-go for me. As for detection, for now it's fairly obvious to anyone who is reasonably familiar with using an LLM when something is LLM-generated. The detection tools available online are basically hot garbage. ] (]) 12:55, 3 December 2024 (UTC)
* '''Support''' per EEng, JSS, SFR. ]'']'' 13:49, 3 December 2024 (UTC) | |||
* '''Soft support''' - Concur that completely LLM-generated comments should be disallowed, LLM-assisted comments (i.e. - I write a comment and then use LLMs as a spell-check/grammar engine) are more of a grey-area and shouldn't be explicitly disallowed. (ping on reply) ] (]) 14:03, 3 December 2024 (UTC) | |||
* '''COMMENT''' : Is there any perfect LLM detector ? I am a LLM ! Are you human ? Hello Mr. Turing, testing 1,2,3,4 ...oo ] <span style="color:#FC0;background-color:#593;letter-spacing:-2px">★</span> ] <!--Template:Undated--><small class="autosigned">— Preceding ] comment added 14:57, 3 December 2024 (UTC)</small> <!--Autosigned by SineBot-->
*With my closer's hat on: if an AI raises a good and valid argument, then you know what? There's a good and valid argument and I'll give weight to it. But if an AI makes a point that someone else has already made in the usual waffly AI style, then I'm going to ignore it.—] <small>]/]</small> 18:33, 3 December 2024 (UTC) | |||
*'''Support''' all LLM output should be treated as vandalism. ] (]) 20:59, 3 December 2024 (UTC)
* '''Oppose''' as written. I'm with Rhododendrites in that we should give a more general caution rather than a specific rule. A lot of the problems here can be resolved by enforcing already-existing expectations. If someone is making a bunch of hollow or boiler-plate comments, or if they're bludgeoning, then we should already be asking them to engage more constructively, LLM or otherwise. I also share above concerns about detection tools being insufficient for this purpose and advise people not to use them to evaluate editor conduct. {{small|(Also, can we stop with the "strong" supports and opposes? You don't need to prove you're more passionate than the guy next to you.)}} ] (]) 02:04, 4 December 2024 (UTC) | |||
*'''Oppose''' as written. There's already enough administrative discretion to handle this on a case-by-case basis. In agreement with much of the comments above, especially the concern that generative text can be a tool to give people access who might not otherwise (due to ability, language) etc. Regards, --] (]) 06:12, 4 December 2024 (UTC) | |||
*'''Strong support''' LLMs are a sufficiently advanced form of the ] (1994). Output of LLMs should be collapsed and the offender barred from further discussion on the subject. Inauthentic behavior. Pollutes the discussion. At the very least, any user of an LLM should be required to disclose LLM use on their user page and to provide a rationale. A new user group can also be created (''LLM-talk-user'' or ''LLM-user'') to mark as such, by self or by the community. Suspected sockpuppets + suspected LLM users. The obvious patterns in output are not that hard to detect, with high degrees of confidence. As to "heavily edited" output, where is the line? If someone gets "suggestions" on good points, they should still write entirely in their own words. A legitimate use of AI may be to summarize walls of text. Even then, caution and not to take it at face value. You will end up with LLMs arguing with other LLMs. Lines must be drawn. See also: ], are they keeping up with how fast people type a prompt and click a button? ] (]) 07:45, 4 December 2024 (UTC) | |||
*I '''support''' the proposal that obvious LLM-generated !votes in discussions should be discounted by the closer or struck (the practical difference should be minimal). Additionally, users who do this can be warned using the appropriate talk page templates (e.g. ]), which are now included in Twinkle. I '''oppose''' the use of automated tools like GPTZero as the primary or sole method of determining whether comments are generated by LLMs. LLM comments are usually glaringly obvious (section headers within the comment, imprecise puffery, and at AfD an obvious misunderstanding of notability policies and complete disregard for sources). If LLM-ness is not glaringly obvious, it is not a problem, and we should not be going after editors for their writing style or because some tool says they look like a bot. ] ]] 10:29, 4 December 2024 (UTC)
*:I also think closers should generally be more aggressive in discarding arguments counter to policy and all of us should be more aggressive in telling editors bludgeoning discussions with walls of text to shut up. These also happen to be the two main symptoms of LLMs. ] </span>]] 10:41, 4 December 2024 (UTC) | |||
*::In other words LLMs are irrelevant - you just want current policy to be better enforced. ] (]) 15:24, 5 December 2024 (UTC) | |||
*'''Oppose''' Having seen some demonstrated uses of LLMs in the accessibility area, I fear a hard and fast rule here is inherently discriminatory. ] (]) 10:50, 4 December 2024 (UTC) | |||
*:What if LLM-users just had to note that a given comment was LLM-generated? ] (]) 19:01, 4 December 2024 (UTC) | |||
*:::What would we gain from that? If the comment is good (useful, relevant, etc) then it's good regardless of whether it was written by an LLM or a human. If the comment is bad then it's bad regardless of whether it was written by an LLM or a human. ] (]) 20:04, 4 December 2024 (UTC) | |||
*::::Well, for one, if they're making an argument like the one referenced by @] and @] above it would have saved a lot of editor time to know that the fake quotes from real references were generated by LLM, so that other editors could've stopped trying to track those specific passages down after the first one failed verification. {{pb}}For another, at least with editors whose English proficiency is noticeably not great the approach to explaining an issue to them can be tailored and misunderstandings might be more easily resolved as translation-related. I know when I'm communicating with people I know aren't native English-speakers I try to be more direct/less idiomatic and check for typos more diligently. ] (]) 22:46, 4 December 2024 (UTC) | |||
*::And see what ChatGPT itself had to say about that idea, at ] above. ]] 22:25, 4 December 2024 (UTC) | |||
*'''Oppose''' per above. As Rhododendrites points out, detection of LLM-generated content is not foolproof and even when detection is accurate, such a practice would be unfair for non-native English speakers who rely on LLMs to polish their work. Additionally, we evaluate contributions based on their substance, not by the identity and social capital of the author, so using LLMs should not be seen as inherently inferior to wholly human writing—are ChatGPT's arguments ipso facto less than a human's? If so, why?<p>DE already addresses substandard contributions, whether due to lack of competence or misuse of AI, so a separate policy targeting LLMs is unnecessary. ] 21:14, 4 December 2024 (UTC)</p> | |||
*:{{Tqq|We evaluate contributions based on their substance, not by the identity and social capital of the author}}: true in theory; not reflected in practice. {{Tqq|are ChatGPT's arguments ipso facto less than a human's?}} Yes. Chatbots are very advanced predictive text engines. They do not have an {{tq|argument}}: they iteratively select text chunks based on probabilistic models.{{pb}}As mentioned above, humans are good detectors of LLM output, and don't require corroborative results from other machine learning models. ] (]) 14:00, 5 December 2024 (UTC) | |||
*::"...LLMs can produce novel arguments that convince independent judges at least on a par with human efforts. Yet when informed about an orator’s true identity, judges show a preference for human over LLM arguments." - Palmer, A., & Spirling, A. (2023). Large Language Models Can Argue in Convincing Ways About Politics, But Humans Dislike AI Authors: implications for Governance. Political Science, 75(3), 281–291. https://doi.org/10.1080/00323187.2024.2335471. And that result was based on Meta's OPT-30B model, which performed at about GPT-3 level. There are far better-performing models out there now, like GPT-4o and Claude 3.5 Sonnet. ] (]) 15:24, 5 December 2024 (UTC) | |||
*::{{tq|As mentioned above, humans are good detectors of LLM output, and don't require corroborative results from other machine learning models.}} Yet your reply to me made no mention of the fact that my comment is almost wholly written by an LLM, the one exception being me replacing "the Misplaced Pages policy ''Disruptive editing''" with "DE". I went to ChatGPT, provided it a handful of my comments on Misplaced Pages and elsewhere, as well as a few comments on this discussion, asked it to mimic my style (which probably explains why the message contains my stylistic quirks turned up to 11), and repeatedly asked it to trim the post. I'd envision a ChatGPT account, with a larger context window, would allow even more convincing comments, to say nothing of the premium version. A DUCK-style test for comments singles out people unfamiliar with the differences between formal English and LLM outputs, precisely those who need it most since they can write neither. Others have raised scenarios where a non-fluent speaker may need to contribute. | |||
*::In other words, LLMs can 100% be used for constructive !votes on RfCs, AfDs, and whatnot. I fed it my comments only so that those familiar with my writing style wouldn't get suspicious. I believe every word in the comment and had considered every point it made in advance, so I see no reason for this to be worth less than if I had typed it out myself. If I'd bullet-pointed my opinion and asked it to expand, that'd have been better yet. | |||
*::{{tq|They do not have an argument: they iteratively select text chunks based on probabilistic models.}} I'm aware. If a ], is the play suddenly worth( )less? An LLM is as if the monkey were not selecting words at random, but rather choosing what to type based on contextualized tokens. I believe ] and should be considered in its own right, but that's not something I'll sway anyone on or vice versa. | |||
*::{{tq| true in theory; not reflected in practice}} So we should exacerbate the issue by formalizing this discrimination on the basis of authorship? | |||
*::<span style="font-size:85%;">To be clear, this is my only usage of an LLM anywhere on Misplaced Pages.</span> ] 01:22, 6 December 2024 (UTC) | |||
*:::{{tq|In other words, LLMs can 100% be used for constructive !votes on RfCs, AfDs, and whatnot.}} So then what is the point in having any discussion at all if an LLM can just spit out a summary of whichever policies and prior comments it was fed and have its "opinion" counted? What happens when there are multiple LLM-generated comments in a discussion, each fed the same prompt material and prior comments -- that would not only artificially sway consensus significantly in one direction (including "no consensus"), it could produce a consensus stance that no human !voter even supported! It also means those human participants will waste time reading and responding to "users" who cannot be "convinced" of anything. Even for editors who can detect LLM content, it's still a waste of their time reading up to the point they recognize the slop. And if closers are ''not'' allowed to discount seemingly-sound arguments solely because they were generated by LLM, then they have to have a lot of faith that the discussion's participants not only noticed the LLM comments, but did thorough fact-checking of any tangible claims made in them. With human comments we can at least assume good faith that a quote is really in a particular inaccessible book.{{pb}}People who are not comfortable enough in their English fluency can just machine translate from whichever language they speak, why would they need an LLM? And obviously people who are not competent in comprehending ''any'' language should not be editing Misplaced Pages... ] (]) 03:17, 6 December 2024 (UTC) | |||
*::::Human !voters sign off and take responsibility for the LLM opinions they publish. If they continue to generate, then the relevant human signer wouldn't be convinced of anything anyway; at least here, the LLM comments might make more sense than whatever nonsense the unpersuadable user might've generated. (And machine translation relies on LLMs, not to mention there are people who don't know any other language yet have trouble communicating. Factual writing and especially comprehension are different from interpersonal persuasion.)<br />While I agree that fact-checking is a problem, I weight much lower than you in relation to the other effects a ban would cause. ] (]) 15:16, 6 December 2024 (UTC) | |||
*::::{{tq|So then what is the point in having any discussion at all if an LLM can just spit out a summary of whichever policies and prior comments it was fed and have its "opinion" counted?}} I'm of the opinion humans tend to be better at debating, reading between the lines, handling obscure PAGs, and arriving at consensus. {{tq|What happens when there are multiple LLM-generated comments in a discussion, each fed the same prompt material and prior comments -- that would not only artificially sway consensus significantly in one direction (including "no consensus"), it could produce a consensus stance that no human !voter even supported!}} It's safe to assume those LLMs are set to a low temperature, which would cause them to consistently agree when fed the same prompt. In that case, they'll produce the same arguments; instead of rebutting x humans' opinions, those on the opposite side need rebut one LLM. If anything, that's less time wasted. Beyond that, if only one set of arguments is being raised, a multi-paragraph !vote matters about as much as a "Support per above". LLMs are not necessary for people to be disingenuous and !vote for things they don't believe. Genuine question: what's worse, this hypothetical scenario where multiple LLM users are swaying a !vote to an opinion no-one believes or the very real and common scenario that a non-English speaker needs to edit enwiki? | |||
*::::{{tq|Even for editors who can detect LLM content, it's still a waste of their time reading up to the point they recognize the slop.}} This proposal wouldn't change that for most people, because it's about closers. | |||
*::::{{tq|With human comments we can at least assume good faith that a quote is really in a particular inaccessible book.}} No-one's saying you should take an LLM's word for quotes from a book. | |||
*::::{{tq|People who are not comfortable enough in their English fluency can just machine translate from whichever language they speak, why would they need an LLM?}} It's a pity you're lobbying to ban most machine translators. ] 17:08, 6 December 2024 (UTC) | |||
*:::::{{tq|It's safe to assume those LLMs are set to a low temperature, which would cause them to consistently agree when fed the same prompt. In that case, they'll produce the same arguments; instead of rebutting x humans' opinions, those on the opposite side need rebut one LLM. If anything, that's less time wasted. }} ...You do know how consensus works, right? Since closers are supposed to consider each contribution individually and without bias to "authorship" to determine the amount of support for a position, then even a shitty but shallowly policy-based position would get consensus based on numbers alone. And again, ''non-English speakers can use machine-translation'', like they've done for the last two decades. {{pb}}{{tq|This proposal wouldn't change that for most people, because it's about closers.}} ''Of course it would''; if we know closers will disregard the LLM comments, we won't need to waste time reading and responding to them.{{pb}}{{tq|No-one's saying you should take an LLM's word for quotes from a book.}} Of course they are. If LLM comments must be evaluated the same as human comments, then AGF on quote fidelity applies too. Otherwise we would be expecting people to do something like "disregard an argument based on being from an LLM".{{pb}}{{tq|It's a pity you're lobbying to ban most machine translators.}} The spirit of this proposal is clearly not intended to impact machine translation. AI-assisted != AI-generated. ] (]) 18:42, 6 December 2024 (UTC) | |||
*::::::I appreciate that the availability of easily generated paragraphs of text (regardless of underlying technology) in essence makes the "]" effect worse. I think, though, it's already been unmanageable for years now, without any programs helping. We need a more effective way to manage decision-making discussions so participants do not feel a need to respond to all comments, and the weighing of arguments is considered more systematically to make the community consensus more apparent. ] (]) 19:41, 6 December 2024 (UTC) | |||
*:::::::{{tq|Since closers are supposed to consider each contribution individually and without bias to "authorship"}} I'm the one arguing for this to be practice, yes. {{tq|then even a shitty but shallowly policy-based position would get consensus based on numbers alone}} That is why I state "per above" and "per User" !votes hold equal potential for misuse. | |||
*:::::::{{tq|Of course it would; if we know closers will disregard the LLM comments, we won't need to waste time reading and responding to them.}} We don't know closers are skilled at recognizing LLM slop. I think my !vote shows many who think they can tell cannot. Any commenter complaining about a non-DUCK post will have to write out "This is written by AI" and explain why. DUCK posts already run a''fowl'' of BLUDGEON, DE, SEALION, ]. | |||
*:::::::{{tq|If LLM comments must be evaluated the same as human comments, then AGF on quote fidelity applies too}}. Remind me again of what AGF stands for? Claiming LLMs have faith of any kind, good or bad, is ludicrous. From the policy, {{tq|Assuming good faith (AGF) means assuming that people are not deliberately trying to hurt Misplaced Pages, even when their actions are harmful.}} A reasonable reply would be "Are these quotes generated by AI? If so, please be aware AI chatbots are prone to hallucinations and cannot be trusted to cite accurate quotes." This AGFs the poster doesn't realize the issue and places the burden of proof squarely on them. | |||
*:::::::{{tq|AI-assisted != AI-generated}} . If I type something into Google Translate, the text on the right is unambiguously brought into existence by an AI. ] 21:22, 6 December 2024 (UTC) | |||
*::::::::"Per above" !votes do not require other editors to read and/or respond to their arguments, and anyway are already typically downweighted, unlike !votes actively referencing policy. {{pb}}The whole point is to disregard comments that ''have been found'' to be AI-generated; it is not exclusively up to the closer to ''identify'' those comments in the first place. Yes we will be expecting other editors to point out less obvious examples and to ask if AI was used, what is the problem with that?{{pb}}No, DUCK posts do not necessarily already violate BLUDGEON etc., as I learned in the example from Selfstudier, and anyway we still don't discount the !votes of editors in good standing that bludgeoned/sealioned etc. so that wouldn't solve the problem at all. {{pb}}Obviously other editors will be asking suspected LLM commenters if their comments are from LLMs? But what you're arguing is that even if the commenter says yes, ''their !vote still can't be disregarded for that reason alone'', which means the burden is still on other editors to prove that the content is false. {{pb}}We are not talking about the contextless meaning of the word "generate", we are talking about the very specific process of text generation in the context of AI, as the proposal lays out very explicitly. ] (]) 02:13, 7 December 2024 (UTC) | |||
*:::::::::I’m not going to waste time debating someone who resorts to claiming that people on the other side are ignorant of technology, or to crude strawmen. If anyone else is interested in actually hearing my responses, feel free to ask. ] 16:13, 7 December 2024 (UTC) | |||
*::::::::::Or you could actually try to rebut my points without claiming I'm trying to ban all machine translators... ] (]) 22:07, 7 December 2024 (UTC) | |||
*:::::::::::For those following along, I never claimed that. I claimed those on JoelleJay’s side are casting !votes such that most machine translators would be banned. It was quite clear at the time that they, personally, support a carve out for machine translation and I don’t cast aspersions. ] 15:42, 8 December 2024 (UTC) | |||
*'''Support''' a broad bar against ''undisclosed'' LLM-generated comments and even a policy that undisclosed LLM-generated comments could be sanctionable, in addition to struck through / redacted / ignored; people using them for accessibility / translation reasons could just disclose that somewhere (even on their user page would be fine, as long as they're all right with some scrutiny as to whether they're actually using it for a legitimate purpose.) The fact is that LLM comments raise significant risk of abuse, and the fact that a comment is clearly LLM-generated is often going to be the only evidence of that abuse. I wouldn't be opposed to a more narrowly-tailored ban on using LLMs in any sort of automated way, but I feel a broader ban may be the only practical way to confront the problem. That said, I'd '''oppose''' the use of tools to detect LLM-comments, at least as the primary evidence; those tools are themselves unreliable LLM things. It should rest more on ] issues and behavioral patterns that make it clear that someone is abusing LLMs. --] (]) 22:08, 4 December 2024 (UTC) | |||
*'''Support''' per reasons discussed above; something generated by an LLM is not truly the editor's opinion. On an unrelated note, have we seen any LLM-powered unapproved bots come in and do things like POV-pushing and spam page creation without human intervention? If we haven't, I think it's only a matter of time. <b>]] (])</b> 23:23, 4 December 2024 (UTC) | |||
*'''Weak oppose''' in the sense that I don't think ''all'' LLM discussion text should be deleted. There are at least a few ESL users who use LLMs for assistance but try to check the results as best they can before posting, and I don't think their comments should be removed indiscriminately. What I do support (although not as a formal ]) is being much more liberal in hatting LLM comments when the prompter has failed to prevent ]/irrelevant/incomprehensible output than we maybe would for human-generated text of that nature. ] 03:05, 5 December 2024 (UTC) | |||
*'''Oppose''' Any comments made by any editors are of their own responsibility and representing their own chosen opinions to hit the Publish Changes button on. If that comment was made by an LLM, then whatever it says is something the editor supports. I see no reason whatsoever to collapse anything claimed to be made by an LLM (whose detectors are 100% not reliable in the first place). If the comment being made is irrelevant to the discussion, then hatting it is already something covered by policy in the first place. This does make me want to start my comments with "As a large language model trained by OpenAI" though just to mess with people trying to push these sorts of policy discussions. ]]<sup>]</sup> 05:29, 5 December 2024 (UTC) | |||
** Or, as ChatGPT puts it, | |||
{{collapse top|width=90%|title=Why banning LLM usage in comments would be detrimental, a ChatGPT treatise}} | |||
{{tq|Banning the use of large language models (LLMs) to assist in writing comments on Misplaced Pages would be a step backward in fostering inclusivity, efficiency, and accessibility within the platform. Here are several key reasons why such a ban would be counterproductive:}} | |||
{{tq|'''1. Improving Accessibility and Participation:''' Misplaced Pages is a global platform, and its contributors come from diverse backgrounds, including those who might not be native English speakers or may struggle with writing clear, concise comments. LLMs can help these contributors express themselves more effectively by generating text that is coherent, well-structured, and grammatically correct. Banning LLMs would disproportionately impact these users, potentially discouraging them from participating in the platform.}} | |||
{{tq|'''2. Efficiency and Time-Saving:''' Misplaced Pages editors often engage in lengthy discussions about content, and composing thoughtful comments can be time-consuming. LLMs can assist by quickly drafting responses, summarizing arguments, or offering suggestions for improvement. This helps contributors focus on more substantive aspects of their contributions, such as verifying facts or improving content, rather than getting bogged down by the mechanics of writing.}} | |||
{{tq|'''3. Support for Editors with Disabilities:''' Many editors with cognitive or motor impairments may find it difficult to express their thoughts clearly in writing. For these individuals, LLMs offer a valuable tool to communicate their ideas effectively. A ban on using LLMs could create additional barriers for these users, reducing their ability to participate fully in Misplaced Pages's collaborative environment.}} | |||
{{tq|'''4. Quality and Consistency:''' Large language models can help maintain a consistent tone and clarity in Misplaced Pages discussions. By providing suggestions that align with Misplaced Pages's neutral point of view and formal writing standards, LLMs can improve the overall quality of comments. This consistency is especially important in discussions that involve sensitive topics, where clear and neutral language is critical for maintaining Misplaced Pages's credibility and adherence to its editorial guidelines.}} | |||
{{tq|'''5. Encouraging Constructive Dialogue:''' LLMs can assist in crafting more polite, respectful, and balanced comments, reducing the likelihood of inflammatory or aggressive language. This is particularly important on a platform like Misplaced Pages, where discussions can sometimes devolve into heated exchanges. Rather than banning LLMs, encouraging their use could promote healthier, more constructive conversations.}} | |||
{{tq|'''6. Transparency and Accountability:''' If concerns exist about misuse (e.g., generating spammy or low-quality comments), these can be addressed through guidelines and oversight mechanisms. Misplaced Pages already has systems in place for identifying and mitigating disruptive behavior, and LLM-generated comments could be subject to the same scrutiny. A ban is an overly simplistic solution to what could be a more nuanced issue, one that could be addressed through better regulation and community involvement.}} | |||
{{tq|In conclusion, banning the use of large language models on Misplaced Pages would deprive the platform of an important tool for improving accessibility, efficiency, and the quality of discourse. Instead of a ban, the focus should be on establishing guidelines for their responsible use, ensuring that they enhance, rather than detract from, Misplaced Pages's core principles of collaboration, neutrality, and inclusivity.}} | |||
{{collapse bottom}} | |||
::I'm honestly a bit impressed with the little guy. ]]<sup>]</sup> 05:39, 5 December 2024 (UTC) | |||
:::It is somewhat amusing how easy it is to get these chatbots to output apologia for these chatbots. Too bad it's always so shallow. Probably because the people who inserted those canned responses are shallow people, in my opinion. ] (]) 19:44, 6 December 2024 (UTC) | |||
*'''Support''' those who are opposing have clearly never had to deal with trolls who endlessly ]. If I wanted to have a discussion with a chatbot, I'd go and find one. ] (]) 13:14, 5 December 2024 (UTC) | |||
*:What's wrong with just banning and hatting the troll? ] (]) 13:49, 5 December 2024 (UTC) | |||
*:Someone trolling and sealioning can (and should) be blocked under current policy, whether they use an LLM or not is irrelevant. ] (]) 15:22, 5 December 2024 (UTC) | |||
*'''Oppose''' per Rhododendrites. This is a case-by-case behavioral issue, and using LLMs != being a troll. ] (]) 17:30, 5 December 2024 (UTC) | |||
*'''Support''': the general principle is sound - where the substance has been originally written by gen-AI, comments will tend to add nothing to the discussion and even annoy or confuse other users. In principle, we should not allow such tools to be used in discussions. Comments written originally before improvement or correction by AI, particularly translation assistants, fall into a different category. Those are fine. There also has to be a high standard for comment removal. Suspicion that gen-AI might have been used is not enough. High GPTZero scores are not enough. The principle should go into policy but under a stonking great caveat - ] takes precedence and a dim view will be taken of generative-AI inquisitors. ] 17:37, 5 December 2024 (UTC) | |||
*'''Support''' If a human didn't write it, humans shouldn't spend time reading it. I'll go further and say that ''LLMs are inherently unethical technology'' and, consequently, ''people who rely on them should be made to feel bad.'' ESL editors who use LLMs to make themselves sound like Brad Anderson in middle management should ''stop doing that'' because it actually gets in the way of clear communication. {{pb}} I find myself unpersuaded by arguments that existing policies and guidelines are adequate here. Sometimes, one needs a linkable statement that applies directly to the circumstances at hand. By analogy, one could argue that we don't really need ], for example, because adhering to ], ], and ] ought already to keep bad material out of biographies of living people. But in practice, it turned out that having a specialized policy that emphasizes the general ethos of the others while tailoring them to the problem at hand is a good thing. ] (]) 18:27, 5 December 2024 (UTC) | |||
*'''Strong support''' - Making a computer generate believable gibberish for you is a waste of time, and tricking someone else into reading it should be a blockable offense. If we're trying to create an encyclopedia, you cannot automate any part of the ''thinking''. We can automate processes in general, but any attempt at automating the actual discussion or thought-processes should never be allowed. If we allow this, it would waste countless hours of community time dealing with inane discussions, sockpuppetry, and disruption.{{pb | |||
}} Imagine a world where LLMs are allowed and popular - it's a sockpuppeteer's dream scenario - you can run 10 accounts and argue the same points, and the reason why they all sound alike is just merely because they're all LLM users. You could even just spend a few dollars a month and run 20-30 accounts to automatically disrupt wikipedia discussions while you sleep, and if LLM usage was allowed, it would be very hard to stop.{{pb | |||
}} However, I don't have much faith in AI detection tools (partially because it's based on the same underlying flawed technology), and would want any assumption of LLM usage to be based on obvious evidence, not just a score on some website. <small>Also, to those who are posting chatgpt snippets here: please stop - it's not interesting or insightful, just more slop</small> ] ] 19:15, 5 December 2024 (UTC) | |||
*:I agree with your assessment “Also, to those who are posting chatgpt snippets here: please stop - it's not interesting or insightful, just more slop” but unfortunately some editors who should ''really'' know better think it’s WaCkY to fill serious discussions with unfunny, distracting “humor”. ] (]) 21:54, 5 December 2024 (UTC) | |||
*::I also concur. "I used the to generate more text" is not a good joke. ] (]) 22:46, 5 December 2024 (UTC) | |||
*'''Strong support''' if you asked a robot to spew out some AI slop to win an argument you’re basically cheating. The only ethical reason to do so is because you can’t speak English well, and the extremely obvious answer to that is “if you can barely speak English why are you editing ''English Misplaced Pages?”'' That’s like a person who doesn’t understand basic physics trying to explain the ] using a chatbot. ] (]) 21:32, 5 December 2024 (UTC) | |||
*:I don't think "cheating" is a relevant issue here. Cheating is a problem if you use a LLM to win and get a job, award, college acceptance etc. that you otherwise wouldn't deserve. But WP discussions aren't a debating-skills contest, they're an attempt to determine the best course of action. | |||
*:So using an AI tool in a WP discussion is not ''cheating'' (though there may be other problems), just as riding a bike instead of walking isn't cheating unless you're trying to win a race. <span style="font-family:cursive">]]</span> 22:36, 5 December 2024 (UTC) | |||
*::Maybe “cheating” isn’t the right word. But I think that a) most AI generated content is garbage (it can polish the turd by making it sound professional, but it’s still a turd underneath) and b) it’s going to be abused by people ]. An AI can pump out text far faster than a human and that can drown out or wear down the opposition if nothing else. ] (]) 08:08, 6 December 2024 (UTC) | |||
*:::Bludgeoning is already against policy. It needs to be more strongly enforced, but it needs to be more strongly enforced uniformly rather than singling out comments that somebody suspects might have had AI-involvement. ] (]) 10:39, 6 December 2024 (UTC) | |||
* '''Support'''; I agree with Remsense and jlwoodwa, among others: I wouldn't make any one AI-detection site the Sole Final Arbiter of whether a comment "counts", but I agree it should be expressly legitimate to discount AI / LLM slop, at the very least to the same extent as closers are already expected to discount other insubstantial or inauthentic comments (like if a sock- or meat-puppet copy-pastes a comment written for them off-wiki, as there was at least one discussion and IIRC ArbCom case about recently). ] (]) 22:10, 5 December 2024 (UTC) | |||
*:You don't need a new policy that does nothing but duplicate a subset of existing policy. At ''most'' what you need is to add a sentence to the existing policy that states "this includes comments written using LLMs", however you'd rightly get a lot of pushback on that because it's completely redundant and frankly goes without saying. ] (]) 23:37, 5 December 2024 (UTC) | |||
*'''Support''' hallucinations are real. We should be taking a harder line against LLM generated participation. I don't think everyone who is doing it knows that they need to stop. ''']'''<span style="border:2px solid #073642;background:rgb(255,156,0);background:linear-gradient(90deg, rgba(255,156,0,1) 0%, rgba(147,0,255,1) 45%, rgba(4,123,134,1) 87%);">]</span> 23:47, 5 December 2024 (UTC) | |||
*'''Comment''' - ] that I imagine we will see more often. I wonder where it fits into this discussion. A user employs ]'s ], search+LLM, to help generate their edit request (without the verbosity bias that is common when people don't tell LLMs how much output they want). ] (]) 03:13, 6 December 2024 (UTC) | |||
*'''Support''' per all above. Discussions are supposed to include the original arguments/positions/statements/etc of editors here, not off-site chatbots. ] ] <span style="color:#C8102E;"><small><sup>(])</sup></small></span> 03:53, 6 December 2024 (UTC) | |||
*:I also find it pretty funny that ChatGPT ''itself'' said it shouldn't be used, as per the premise posted above by EEng. ] ] <span style="color:#C8102E;"><small><sup>(])</sup></small></span> 03:58, 6 December 2024 (UTC) | |||
*::"sycophancy is a general behavior of state-of-the-art AI assistants, likely driven in part by human preference judgments favoring sycophantic responses" - . They give us what we want...apparently. And just like with people, there is position bias, so the order of things can matter. ] (]) 04:26, 6 December 2024 (UTC) | |||
* (Is this where I respond? If not, please move.) '''LLM-generated prose should be discounted.''' Sometimes there will be a discernible point in there; it may even be what the editor meant, lightly brushed up with what ChatGPT thinks is appropriate style. (So I wouldn't say "banned and punishable" in discussions, although we already deprecate machine translations on en.wiki and for article prose, same difference—never worth the risk.) However, LLMs don't think. They can't explain with reference to appropriate policy and guidelines. They may invent stuff, or use the wrong words—at AN recently, an editor accused another of "defaming" and "sacrilege", thus drowning their point that they thought that editor was being too hard on their group by putting their signature to an outrageous personal attack. I consider that an instance of LLM use letting them down. If it's not obvious that it is LLM use, then the question doesn't arise, right? Nobody is arguing for requiring perfect English. That isn't what ] means. English is a global language, and presumably for that reason, many editors on en.wiki are not native speakers, and those that aren't (and those that are!) display a wide range of ability in the language. Gnomes do a lot of fixing of spelling, punctuation and grammar in articles. In practice, we don't have a high bar to entrance in terms of English ability (although I think a lot more could be done to ''explain'' to new editors whose English is obviously non-native what the rule or way of doing things is that they have violated). And some of our best writers are non-native; a point that should be emphasised because we all have a right of anonymity here, many of us use it, and it's rare, in particular, that I know an editor's race. Or even nationality (which may not be the same as where they live). But what we do here is write in English: both articles and discussions. 
If someone doesn't have the confidence to write their own remark or !vote, then they shouldn't participate in discussions; I strongly suspect that it is indeed a matter of confidence, of wanting to ensure the English is impeccable. LLMs don't work that way, really. They concoct things like essays based on what others have written. Advice to use them in a context like a Misplaced Pages discussion is bad advice. At best it suggests you let the LLM decide which way to !vote. If you have something to say, say it and if necessary people will ask a question for clarification (or disagree with you). They won't mock your English (I hope! Civility is a basic rule here!) It happens in pretty much every discussion that somebody makes an English error. No biggie. I'll stop there before I make any more typos myself; typing laboriously on my laptop in a healthcare facility, and anyway ] covers this. ] (]) | |||
*I dunno about this specifically but I want to chime in to say that I find LLM-generated messages super fucking rude and unhelpful and support efforts to discourage them. – ] <small>(])</small> 08:15, 6 December 2024 (UTC) | |||
*'''Comment''' I think obvious LLM/chatbot text should at least be tagged through an Edit filter for Recent Changes, then RC Patrollers and reviewers can have a look and decide for themselves. ] <sup>(])</sup> <sub>(])</sub> 11:58, 6 December 2024 (UTC) | |||
*:How do you propose that such text be identified by an edit filter? LLM detections tools have high rates of both false positives and false negatives. ] (]) 12:47, 6 December 2024 (UTC) | |||
*::It might become possible once watermarks (like DeepMind's SynthID) are shown to be robust and are adopted. Some places are likely to require it at some point e.g. EU. I guess it will take a while though and might not even happen e.g. I think OpenAI recently decided to not go ahead with their watermark system for some reason. ] (]) 13:17, 6 December 2024 (UTC) | |||
*:::It will still be trivial to bypass the watermarks, or use LLMs that don't implement them. It also (AIUI) does nothing to reduce false positives (which for our usecase are far more damaging than false negatives). ] (]) 13:30, 6 December 2024 (UTC) | |||
*::::Maybe, that seems to be the case with some of the proposals. Others, like SynthID, claim high detection rates, maybe because even a small amount of text contains a lot of signals. As for systems that don't implement them, I guess that would be an opportunity to make a rule more nuanced by only allowing use of watermarked output with verbosity limits...not that I support a rule in the first place. People are going to use/collaborate with LLMs. Why wouldn't they? ] (]) 14:38, 6 December 2024 (UTC) | |||
*:::::I don't think watermarks are a suitable thing to take into account. My view is that LLM usage should be a blockable offense on any namespace, but if it ends up being allowed under some circumstances then we at least need mandatory manual disclosures for any usage. Watermarks won't work / aren't obvious enough - we need something like {{t|LLM}} but self-imposed, and not tolerate unmarked usage. ] ] 18:21, 6 December 2024 (UTC) | |||
*::::::They will have to work at some point. ] (]) 06:27, 7 December 2024 (UTC) | |||
*:Good news! {{u|Queen of Hearts}} is already working on that in {{edf|1325}}. ] (]) 16:12, 6 December 2024 (UTC) | |||
*::See also ]. ] (]) 17:32, 6 December 2024 (UTC) | |||
*'''Comment''' As a practical matter, users posting obvious LLM-generated content will typically be in violation of other rules (e.g. disruptive editing, sealioning), in which case their discussion comments absolutely should be ignored, discouraged, discounted, or (in severe cases) hatted. But a smaller group of users (e.g. people using LLMs as a translation tool) may be contributing productively, and we should seek to engage with, rather than discourage, them. So I don't see the need for a separate bright-line policy that risks erasing the need for discernment — in most cases, a friendly reply to the user's first LLM-like post (perhaps mentioning ], which isn't a policy or guideline, but is nevertheless good advice) will be the right approach to work out what's really going on. ] (]) 15:53, 6 December 2024 (UTC) | |||
*:Yeah, this is why I disagree with the BLP analogy above. There's no great risk/emergency that justifies banning discernment. ] (]) 17:34, 6 December 2024 (UTC) | |||
*:{{small|Those pesky ] are just the worst!}} ] (]) 18:41, 6 December 2024 (UTC) | |||
*:Some translation tools have LLM ''assistance'', but the whole point of ''generative'' models is to create text far beyond what is found in the user's input, and the latter is clearly what this proposal covers. ] (]) 19:01, 6 December 2024 (UTC) | |||
*::That ''might'' be what the proposal ''intends'' to cover, but it is not what the proposal ''actually'' covers. The proposal covers ''all'' comments that have been generated by LLMs and/or AI, without qualification. ] (]) 01:05, 7 December 2024 (UTC) | |||
*:::70+% here understand the intention matches the language: ''generated by LLMs etc'' means "originated through generative AI tools rather than human thought", not "some kind of AI was involved in any step of the process". Even LLM translation tools don't actually ''create'' meaningful content where there wasn't any before; the generative AI aspect is only in the use of their vast training data to characterize the semantic context of your input in the form of mathematical relationships between tokens in an embedding space, and then match it with the collection of tokens most closely resembling it in the other language. There is, definitionally, a high level of creative constraint in what the translation output is since semantic preservation is required, something that is ''not'' true for text ''generation''. ] (]) 04:01, 7 December 2024 (UTC) | |||
*:::::Do you have any evidence for your assertion that 70% of respondents have interpreted the language in the same way as you? Reading the comments associated with the votes suggests that it's closer to 70% of respondents who don't agree with you. Even if you are correct, 30% of readers interpreting a policy differently indicates the policy is badly worded. ] (]) 08:34, 7 December 2024 (UTC) | |||
*:::::I think @] has summarized the respondent positions sufficiently below. I also think some portion of the opposers understand the proposal perfectly well and are just opposing anything that imposes participation standards. ] (]) 22:54, 7 December 2024 (UTC) | |||
*::::There will be many cases where it is not possible to say whether a piece of text does or does not contain "human thought" by observing the text, even if you know it was generated by an LLM. Statements like "originated through generative AI tools rather than human thought" will miss a large class of use cases, a class that will probably grow over the coming years. People work with LLMs to produce the output they require. It is often an iterative process by necessity because people and models make mistakes. An example of when "...rather than human thought" is not the case is when someone works with an LLM to solve something like a challenging technical problem where neither the person nor the model has a satisfactory solution to hand. The context window means that, just like with human collaborators, a user can iterate towards a solution through dialog and testing, exploring the right part of the solution space. Human thought is not absent in these cases, it is present in the output, the result of a collaborative process. In these cases, something "far beyond what is found in the user's input" is the objective, it seems like a legitimate objective, but regardless, it will happen, and we won't be able to see it happening. ] (]) 10:46, 7 December 2024 (UTC) | |||
*:::::Yes, but this proposal is supposed to apply to just the obvious cases and will hopefully discourage good-faith users from using LLMs to create comments wholesale in general. It can be updated as technology progresses. There's also no reason editors using LLMs to organize/validate their arguments, or as search engines for whatever, ''have'' to copy-paste their raw output, which is much more of a problem since it carries a much higher chance of hallucination. That some people who are especially familiar with how to optimize LLM use, or who pay for advanced LLM access, will be able to deceive other editors is not a reason to ''not'' formally proscribe wholesale comment generation. ] (]) 22:27, 7 December 2024 (UTC) | |||
*::::::That's reasonable. I can get behind the idea of handling obvious cases from a noise reduction perspective. But for me, the issue is noise swamping signal in discussions rather than how it was generated. I'm not sure we need a special rule for LLMs, maybe just a better way to implement the existing rules. ] (]) 04:14, 8 December 2024 (UTC) | |||
*'''Support''' ] ] (]) 18:30, 6 December 2024 (UTC) | |||
* '''Comment''': The original question was whether we should discount, ignore, strikethrough, or collapse chatbot-written content. I think there's a very big difference between these options, but most support !voters haven't mentioned which one(s) they support. That might make judging the consensus nearly impossible; as of now, supporters are the clear !majority, but supporters of ''what''? {{--}} <span style="font-family:cursive">]]</span> 19:32, 6 December 2024 (UTC) | |||
*:That means that supporters support the proposal {{tq|that LLM-generated remarks in discussions should be discounted or ignored, and possibly removed in some manner}}. Not sure what the problem is here. Supporters support the things listed in the proposal - we don't need a prescribed 100% strict procedure, it just says that supporters would be happy with closers discounting, ignoring or under some circumstances deleting LLM content in discussions. ] ] 19:40, 6 December 2024 (UTC) | |||
*:Doing something? At least the stage could be set for a follow on discussion. ] (]) 19:40, 6 December 2024 (UTC) | |||
*:More people have bolded "support" than other options, but very few of them have even attempted to refute the arguments against (and most that have attempted have done little more than handwaving or directly contradicting themselves), and multiple of those who have bolded "support" do not actually support what has been proposed when you read their comment. It's clear to me there is not going to be a consensus for anything other than "many editors dislike the idea of LLMs" from this discussion. ] (]) 00:58, 7 December 2024 (UTC) | |||
*:::Arguing one point doesn't necessarily require having to refute every point the other side makes. I can concede that "some people use LLMs to improve their spelling and grammar" without changing my overriding view that LLMs empower bad actors, time wasters and those with competence issues, with very little to offer wikipedia in exchange. Those that use LLMs legitimately to tidy up their allegedly competent, insightful and self-sourced thoughts should just be encouraged to post the prompts themselves instead of churning it through an LLM first. ] ] 09:00, 7 December 2024 (UTC) | |||
*:::If you want to completely ignore all the other arguments in opposition that's your choice, but don't expect closers to attach much weight to your opinions. ] (]) 09:05, 7 December 2024 (UTC) | |||
*::::Ok, here's a list of the main opposition reasonings, with individual responses. | |||
*::::'''What about translations?''' - Translations are not up for debate here, the topic here is very clearly generative AI, and attempts to say that this topic covers translations as well are incorrect. No support voters have said the proposition should discount translated text, just oppose voters who are trying to muddy the waters. | |||
*::::'''What about accessibility?''' - This could be a legitimate argument, but I haven't seen it substantiated anywhere other than handwaving "AI could help people!" arguments, which I would lump into the spelling and grammar argument I responded to above. | |||
*::::'''Detection tools are inaccurate''' - This I very much agree with, and noted in my support and in many others as well. But there is no clause in the actual proposal wording that mandates the use of automated AI detection, and I assume the closer would note that. | |||
*::::'''False positives''' - Any rule can have a potential for false positives, from wp:DUCK to close paraphrasing to NPA. We as a community just have to become skilled at identifying genuine cases, just like we do for every other rule. | |||
*::::'''LLM content should be taken at face value and see if it violates some other policy''' - hopelessly naive stance, and a massive timesink. Anyone who has had the misfortune of going on X/twitter in the last couple of years should know that AI is not just used as an aid for those who have trouble typing, it is mainly used to spam and to spread disinformation. Anyone who knows how bad the sockpuppetry issue is around CTOPs should be absolutely terrified of when (not if) someone decides to launch a full-throated wave of AI bots on Misplaced Pages discussions, because if we have to individually sanction each one like a human then admins will literally have no time for anything else. | |||
*::::I genuinely cannot comprehend how some people could see how AI is decimating the internet through spam, bots and disinformation and still think for even one second that we should open the door to it. ] ] 10:08, 7 December 2024 (UTC) | |||
*:::::There is no door. This is true for sockpuppetry too in my opinion. There can be a rule that claims there is a door, but it is more like a bead curtain. ] (]) 11:00, 7 December 2024 (UTC) | |||
*:::::The Twitter stuff is not a good comparison here. Spam is already nukable on sight, mass disruptive bot edits are also nukable on sight, and it's unclear how static comments on Misplaced Pages would be the best venue to astroturf political opinions (most of which would be off-topic anyway, i.e., nukable on sight). I'd prefer if people didn't use ChatGPT to formulate their points, but if they're trying to formulate a real point then that isn't disruptive in the same way spam is. ] (]) 02:22, 10 December 2024 (UTC) | |||
*:::::::{{tq|it's unclear how static comments on Misplaced Pages would be the best venue to astroturf political opinions}} - by disrupting RFCs and talk page discussions a bad actor could definitely use chatgpt to astroturf. A large proportion of the world uses Misplaced Pages (directly or indirectly) to get information - it would be an incredibly valuable thing to manipulate. My other point is that AI disruption bots (like the ones on twitter) would be indistinguishable from individuals using LLMs to "fix" spelling and grammar - by allowing one we make the other incredibly difficult to identify. How can you tell the difference between a bot and someone who just uses chatgpt for every comment? ] ] 09:16, 10 December 2024 (UTC) | |||
*:::::::You can't. That's the point. This is kind of the whole idea of ]. ] (]) 20:22, 13 December 2024 (UTC) | |||
*:::{{tqb|Those that use LLMs legitimately to tidy up their allegedly competent, insightful and self-sourced thoughts should just be encouraged to post the prompts themselves instead of churning it through an LLM first.}}Social anxiety: Say "I" am a person unconfident in my writing. I imagine that when I post my raw language, I embarrass myself, and my credibility vanishes, while in the worst case nobody understands what I mean. As low confidence is often built up through negative feedback, it's usually reasonable, or was reasonable at some point, for someone to seek outside help. ] (]) 23:46, 8 December 2024 (UTC) | |||
*::::While I sympathise with that hypothetical, ] and we shouldn't make decisions that do long-term harm to the project just because a hypothetical user feels emotionally dependent on a high tech spellchecker. I also think that in general wikipedia (myself included) is pretty relaxed about spelling and grammar in talk/WP space. ] ] 18:45, 10 December 2024 (UTC) | |||
*:::::We also shouldn't do long term harm to the project just because a few users are wedded to idea that LLMs are and will always be some sort of existential threat. The false positives that are an unavoidable feature of this proposal will do far more, and far longer, harm to the project than LLM-comments that are all either useful, harmless or collapseable/removable/ignorable at present. ] (]) 19:06, 10 December 2024 (UTC) | |||
*::::::{{tq|The false positives that are an unavoidable feature of this proposal will do far more, and far longer, harm to the project}} - the same could be said for ]. The reason why its not a big problem for DUCK is because the confidence level is very high. Like I've said in multiple other comments, I don't think "AI detectors" should be trusted, and that the bar for deciding whether something was created via LLM should be very high. I 100% understand your opinion and the reasoning behind it, I just think we have differing views on how well the community at large can identify AI comments. ] ] 09:07, 11 December 2024 (UTC) | |||
*:::::I don't see how allowing shy yet avid users to contribute has done or will do long-term harm. The potential always outweighs rational evaluation of outcomes for those with anxiety, a condition that is not behaviorally disruptive. ] (]) 02:47, 11 December 2024 (UTC) | |||
*::::::I definitely don't want to disallow shy yet avid users! I just don't think having a "using chatgpt to generate comments is allowed" rule is the right solution to that problem, considering the wider consequences. ] ] 08:52, 11 December 2024 (UTC) | |||
*:::::::Did you mean "... disallowed"? If so, I think we weigh accessibility differently against the quite low amount of AI trolling. ] (]) 14:10, 11 December 2024 (UTC) | |||
* '''Support''' strikethroughing or collapsing per everyone else. The opposes that mention ESL have my sympathy, but I am not sure how many of them are ESL themselves. Having learnt English as my second language, I have always found it easier to communicate when users are expressing things in their own way, not polished by some AI. I sympathise with the concerns and believe the right solution is to lower our community standards with respect to ] and similar (in terms of ESL communication) without risking hallucinations by AI. ] (]) 02:52, 7 December 2024 (UTC) | |||
* '''Oppose the use of AI detection tools'''. False positive rates for AI-detection are dramatically higher for non-native English speakers. ~ ] (] • ]) 17:53, 8 December 2024 (UTC) | |||
===Section break 5=== | |||
* '''Oppose''' - I'm sympathetic to concerns of abuse through automated mass-commenting, but this policy looks too black-and-white. Contributors may use LLMs for many reasons, including to fix the grammar, to convey their thoughts more clearly, or to adjust the tone for a more constructive discussion. As it stands, this policy may lead to dismissing good-faith AI-assisted comments, as well as false positives, without considering the context. Moreover, while mainstream chatbots are not designed to just mimic the human writing style, there are existing tools that can make AI-generated text more human-like, so this policy does not offer that much protection against maliciously automated contributions. ] (]) 01:12, 9 December 2024 (UTC) | |||
* '''Oppose''' – Others have cast doubt on the efficacy of tools capable of diagnosing LLM output, and I can't vouch for its being otherwise. If EEng's example of ChatBot output is representative—a lengthy assertion of notability without citing sources—that is something that could well be disregarded whether it came from a bot or not. If used carefully, AI can be useful as an aide-memoire (such as with a spell- or grammar-checker) or as a supplier of more felicitous expression than the editor is naturally capable of (e.g. Google Translate). ] (]) 10:27, 9 December 2024 (UTC) | |||
* '''Comment''' / '''Oppose as written'''. It's not accurate that GPTZero is good at detecting AI-generated content. Citations (slightly out of date but there's little reason to think things have changed from 2023): https://www.aiweirdness.com/writing-like-a-robot/ , https://www.aiweirdness.com/dont-use-ai-detectors-for-anything-important/ . For those too busy to read, a few choice quotes: "the fact that it insisted even one excerpt is not by a human means that it's useless for detecting AI-generated text," and "Not only do AI detectors falsely flag human-written text as AI-written, the way in which they do it is biased" (citing https://arxiv.org/abs/2304.02819 ). Disruptive, worthless content can already be hatted, and I'm not opposed to doing so. Editors should be sharply told to use their own words, and if not already written, an essay saying we'd rather have authentic if grammatically imperfect comments than AI-modulated ones would be helpful to cite at editors who offer up AI slop. But someone merely citing GPTZero is not convincing. GPTZero will almost surely misidentify genuine commentary as AI-generated. So fine with any sort of reminder that worthless content can be hatted, and fine with a reminder not to use ChatGPT for creating Misplaced Pages talk page posts, but not fine with any recommendations of LLM-detectors. ] (]) 20:00, 9 December 2024 (UTC) | |||
*:@], I can't tell if you also oppose the actual proposal, which is to permit hatting/striking obvious LLM-generated comments (using GPTzero is a very minor detail in JSS's background paragraph, not part of the proposal). ] (]) 01:47, 11 December 2024 (UTC) | |||
*:::I support the proposal in so far as disruptive comments can already be hatted and that LLM-generated content is disruptive. I am strongly opposed to giving well-meaning but misguided editors a license to throw everyone's text into an AI-detector and hat the comments that score poorly. I don't think it was ''that'' minor a detail, and to the extent that detail is brought up, it should be as a reminder to use human judgment and ''forbid'' using alleged "AI detectors" instead. ] (]) 03:49, 11 December 2024 (UTC) | |||
*'''Support''' collapsing AI (specifically, ]) comments by behavioral analysis (most actually disruptive cases I've seen are pretty obvious) and not the use of inaccurate tools like ZeroGPT. I think hatting with the title "Editors suspect that this comment has been written by a ]" is appropriate. They take up SO much space in a discussion because they are also unnecessarily verbose, and talk on and on but never ever say something that even approaches having substance. Discussions are for human Misplaced Pages editors, we shouldn't have to sift through comments someone put 0 effort into and outsourced to a robot that writes using random numbers (that's a major part of how tools like ChatGPT work and maintain variety). If someone needs to use an AI chatbot to communicate because they don't understand English, then they are welcome to contribute to their native language Misplaced Pages, but I don't think they have the right to insist that we at enwiki spend our effort reading comments they put minimal effort into besides opening the ChatGPT website. If really needed, they can write in their native language and use a non-LLM tool like Google Translate. The use of non-LLM tools like Grammarly, Google Translate, etc. I think should still be OK for all editors, as they only work off comments that editors have written themselves. ] <sup>]]</sup> 05:10, 10 December 2024 (UTC) | |||
*:Adding that enforcing people writing things in their own words will actually help EAL (English additional language) editors contribute here. I work with EAL people irl, and even people who have almost native proficiency with human-written content find AI output confusing because it says things in the most confusing, verbose ways using difficult sentence constructions and words. I've seen opposers in this discussion who maybe haven't had experience working with EAL people go "what about EAL people?", but really, I think this change will help them (open to being corrected by someone who is EAL, tho). ] <sup>]]</sup> 05:17, 10 December 2024 (UTC) | |||
*:Also, with regards to oppose comments that discussions are not a vote so closes will ignore AI statements which don't have merit - unedited LLM statements are incredibly verbose and annoying, and clog up the discussion. Imagine multiple paragraphs, each with a heading, but all of which say almost nothing, they're borderline ]y. Giving the power to HAT them will help genuine discussion contributors keep with the flow of human arguments and avoid scaring away potential discussion contributors who are intimidated or don't feel they have the time to read the piles of AI nonsense that fill the discussion. ] <sup>]]</sup> 06:38, 10 December 2024 (UTC) | |||
* '''Support''' (removing) in general. How is this even a question? There is no case-by-case. It is a fundamental misunderstanding of how LLMs work to consider their output reliable without careful review. At which point, the editor could have written it themselves without inherent LLM bias. The point of any discussion is to provide analytical response based on the ''context'', not have some tool regurgitate something from a training set that sounds good. And frankly, it is disrespectful to make someone read "AI" responses. It is a tool and there is a place and time for it, but not in discussions in an encyclopedia. — <small> ] <b>∣</b> ]</small> 15:41, 10 December 2024 (UTC) | |||
*'''Strong Support'''. I'm very interested in what you (the generic you) have to say about something. I'm not remotely interested in what a computer has to say about something. It provides no value to the discussion and is a waste of time. ] (]) 18:06, 10 December 2024 (UTC) | |||
*:Comments that provide no value to the discussion can already be hatted and ignored regardless of why they provide no value, without any of the false positive or false negatives inherent in this proposal. ] (]) 18:25, 10 December 2024 (UTC) | |||
*::Indeed, and that's fine for one-offs when a discussion goes off the rails or what-have-you. But we also have ] for disruptive behavior, not working collaboratively, etc. I'm suggesting that using an AI to write indicates that you're not here to build the encyclopedia, you're here to have an AI build the encyclopedia. I reiterate my strong support for AI-written content to be removed, struck, collapsed, or hatted and would support further measures even beyond those. ] (]) 21:54, 11 December 2024 (UTC) | |||
*:::There are two sets of people described in your comment: those who use AI and those who are NOTHERE. The two sets overlap, but nowhere near sufficiently to declare that everybody in the former set are also in the latter set. If someone is NOTHERE they already can and should be blocked, regardless of how they evidence that. Being suspected of using AI (note that the proposal does not require proof) is not sufficient justification on its own to declare someone NOTHERE, per the many examples of constructive use of AI already noted in this thread. ] (]) 22:03, 11 December 2024 (UTC) | |||
*::::To reiterate, I don't believe that any use of AI here is constructive, thus using it is evidence of ], and, therefore, the set of people using AI to write is completely circumscribed within the set of people who are NOTHERE. Please note that I am referring to users who use AI-generated writing, not users suspected of using AI-generated writing. I won't be delving into how one determines whether someone is using AI or how accurate it is, as that is, to me, a separate discussion. This is the end of my opinion on the matter. ] (]) 23:26, 11 December 2024 (UTC) | |||
*:::::You are entitled to your opinion of course, but as it is contradicted by the evidence of both multiple constructive uses and of the near-impossibility of reliably detecting LLM-generated text without false positives, I would expect the closer of this discussion to attach almost no weight to it. ] (]) 00:42, 12 December 2024 (UTC) | |||
*:::::I am ESL and use LLMs sometimes because of that. I feel like I don't fit into the NOTHERE category. It seems like you do not understand what they are or how they can be used constructively. ] (]) 01:43, 12 December 2024 (UTC) | |||
*::::::No, I understand. What you're talking about is no different from using Google Translate or asking a native-speaker to translate it. You, a human, came up with something you wanted to convey. You wrote that content in Language A. But you wanted to convey that message that you - a human - wrote, but now in Language B. So you had your human-written content translated to Language B. I have no qualms with this. It's your human-written content, expressed in Language B. My concern is with step 1 (coming up with something you want to convey), not step 2 (translating that content to another language). You write a paragraph for an article but it's in another language and you need the paragraph that you wrote translated? Fine by me. You ask an AI to write a paragraph for an article? Not fine by me. Again, I'm saying that there is no valid use case for AI-written content. ] (]) 15:59, 12 December 2024 (UTC) | |||
*:::::::It seems very likely that there will be valid use cases for AI-written content if the objective is maximizing quality and minimizing errors. Research demonstrates that there will likely be cases where machines outperform humans in specific Wikipedia domains, and soon. But I think that is an entirely different question than potential misuse of LLMs in consensus-related discussions. ] (]) 16:25, 12 December 2024 (UTC) | |||
*:::::::But your vote and the proposal above make no distinction there, which is the main issue. Also, not to be pedantic, but every prompt to an LLM is written by a human looking to convey a message. Every time someone hits publish on something here, that person is confirming that is what they are saying. So how do we in practice implement what you suggest? Because without a method better than vibes it's worthless. ] (]) 18:53, 12 December 2024 (UTC) | |||
*::::::::The proposal specifies content ''generated'' by LLMs, which has a specific meaning in the context of generative AI. If a prompt itself conveys a meaningful, supported opinion, why not just post that instead? The problem comes when the LLM adds more information than was provided, which is the whole point of generative models. ] (]) 01:52, 13 December 2024 (UTC) | |||
* '''Yes''' in principle. But in practice, LLM detectors are not foolproof, and there are valid reasons to sometimes use an LLM, for example to copyedit. I have used Grammarly before and have even used the Microsoft Editor, and while they aren't powered by LLMs, LLMs are a tool that needs to be used appropriately on Wikipedia. ] ] 19:55, 10 December 2024 (UTC) | |||
* '''Support'''. Using an LLM to reply to editors is lazy and disrespectful of fellow editors' time and brainpower. In the context of AFD, it is particularly egregious since an LLM can't really read the article, read sources, or follow our notability guidelines. {{pb}} By the way: {{tq|gptzero and other such tools are very good at detecting this}}. I don't think this is correct at all. I believe the false-positive rate for AI detectors is quite high. High enough that I would recommend not using AI detectors. –] <small>(])</small> 03:23, 11 December 2024 (UTC) | |||
*'''Question''' @]: Since there appears to be a clear consensus against the AI-detectors part, would you like to strike that from the background? ] (]) 14:10, 11 December 2024 (UTC) | |||
* '''Support'''. AI generated text should be removed outright. If you aren't willing to put the work into doing your own writing then you definitely haven't actually thought deeply about the matter at hand. ]]] 14:16, 11 December 2024 (UTC) | |||
*:This comment is rather ironic given that it's very clear you haven't thought deeply about the matter at hand, because if you had then you'd realise that it's actually a whole lot more complicated than that. ] (]) 14:26, 11 December 2024 (UTC) | |||
*::Thryduulf I don't think this reply is particularly helpful, and it comes off as slightly combative. It's also by my count your 24th comment on this RFC. ] ] 19:20, 11 December 2024 (UTC) | |||
*::I recognize that AI-paraphrased or AI-edited text is not problematic in the same ways as text generated outright by an AI. I only meant to address the core issue at stake: content whose first draft was written by an AI system. ]]] 22:16, 17 December 2024 (UTC) | |||
*'''Oppose''' {{re|Just Step Sideways}} The nomination's second paragraph, run through https://www.zerogpt.com/ , gives "11.39% AI GPT*":{{pb}}{{tqb|I've recently come across several users in AFD discussions that are using LLMs to generate their remarks there. As many of you are aware, gptzero and other such tools are very good at detecting this. I don't feel like any of us signed up for participating in discussions where some of the users are not using their own words but rather letting technology do it for them. Discussions are supposed to be between human editors. <mark>If you can't make a coherent argument on your own, you are not competent to be participating in the discussion.</mark> I would therefore propose that LLM-generated remarks in discussions should be discounted or ignored, and possibly removed in some manner}}{{pb}}The nomination's linked https://gptzero.me/ site previously advertised https://undetectable.ai/ ; how will we deal with that? Imagine the nomination was at AFD. What should be the response to LLM accusations against the highlighted sentence? ] (]) 17:41, 11 December 2024 (UTC) | |||
* '''Support''' with the caveat that our ability to deal with the issue goes only as far as we can accurately identify the issue (this appears to have been an issue raised across a number of the previous comments, both support and oppose, but I think it bears restating because we're approaching this from a number of different angles and it's IMO the most important point regardless of what conclusions you draw from it). ] (]) 19:24, 11 December 2024 (UTC) | |||
*'''Strong support, limited implementation'''. {{tq|1=Wikipedia is written by volunteer editors}}, says our front page. This is who ''we'' are, and ''our'' writing is what Wikipedia is. It's true that LLM-created text can be difficult to identify, so this may be a bit of a moving target, and we should be conservative in what we remove—but I'm sure at this point we've all run across cases (whether here or elsewhere in our digital lives) where someone copy/pastes some text that includes "Is there anything else I can help you with?" at the end, or other blatant tells. This content should be deleted without hesitation. ] (]) 04:11, 12 December 2024 (UTC) | |||
* '''Support in concept, questions over implementation''' — I concur with {{U|Dronebogus}} that users who rely on LLMs should not edit the English Wikipedia. It is not a significant barrier for users to use other means of communication, including online translators, rather than artificial intelligence. How can an artificial intelligence tool argue properly? However, I question how this will work in practice without an unacceptable degree of error. <span style="font-family: monospace;">] (he/him)</span> 22:39, 12 December 2024 (UTC) | |||
*:Many, possibly most, online translators use artificial intelligence based on LLMs these days. ] (]) 22:46, 12 December 2024 (UTC) | |||
*::There is a difference between translating words you wrote in one language into English and using an LLM to write a comment for you. <span style="font-family: monospace;">] (he/him)</span> 22:59, 12 December 2024 (UTC) | |||
*:::Neither your comment nor the original proposal make any such distinction. ] (]) 23:34, 12 December 2024 (UTC) | |||
*::::Well since people keep bringing this up as a semi-strawman: no I don’t support banning machine translation, not that I ''encourage'' using it (once again, if you aren’t competent in English please don’t edit here) ] (]) 07:34, 13 December 2024 (UTC) | |||
*:LLMs are incredible at translating, and many online translators already incorporate them, including Google Translate. Accommodating LLMs is an easy way to support not only ESL editors but also the avid but shy. It has way more benefits than the unseen-to-me amount of AI trolling that isn't already collapse-on-sight. ] (]) 00:05, 13 December 2024 (UTC) | |||
*::Google Translate uses the same transformer architecture that LLMs are built around, and uses e.g. PaLM to develop more language support (through training that enables zero-shot capabilities) and for larger-scale specialized translation tasks performed through the Google Cloud "" API, but it does not incorporate LLMs into ''translating your everyday text input'', which still relies on NMTs. And even for the API features, the core constraint of ''matching'' input rather than ''generating content'' is still retained (obviously it would be very bad for a translation tool to insert material not found in the original text!). LLMs might be good for translation because they are better at evaluating semantic meaning and detecting context and nuance, but again, the ''generative'' part that is key to this proposal is not present. ] (]) 01:20, 13 December 2024 (UTC) | |||
*:::{{tq|PaLM (Pathways Language Model) is a 540 billion-parameter transformer-based large language model (LLM) developed by Google AI.}} If you meant something about how reschlmunking the outputs of an LLM or using quite similar architecture is not really incorporating the LLM, I believe we would be approaching ] levels of recombination, to which my answer is it is the same ship.{{tqb|obviously it would be very bad for a translation tool to insert material not found in the original text!}} ] (]) 01:29, 13 December 2024 (UTC) | |||
*:::::PaLM2 is not used in the consumer app (Google Translate); it's used for research. Google Translate just uses non-generative NMTs to map input to its closest cognate in the target language. ] (]) 01:34, 13 December 2024 (UTC) | |||
*:::::Well, is the NMT really that different enough to not be classified as an LLM? IIRC the definition of an LLM is something that outputs by predicting one-by-one what the next word/"token" should be, and an LLM I asked agreed that NMTs satisfy the definition of a generative LLM, though I think you're the expert here. ] (]) 02:01, 13 December 2024 (UTC) | |||
*::::::Google Translate's NMT hits different enough to speak English much less naturally than ChatGPT 4o. I don't consider it a '''''L'''''LM, because the param count is 380M not 1.8T. | |||
*::::::{{tq|the definition of an LLM is something that outputs by predicting one-by-one what the next word/"token" should be}} No, that def would fit ancient ] tech too. ] (]) 17:50, 13 December 2024 (UTC) | |||
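The Markov-chain point above can be made concrete: a few lines of code also "predict the next token one-by-one" from observed word pairs, yet nobody would call them an LLM. A minimal sketch (the toy corpus is an invented example, not from any real training set):

```python
import random

def build_chain(corpus):
    """Map each word to the list of words observed to follow it."""
    words = corpus.split()
    chain = {}
    for current, nxt in zip(words, words[1:]):
        chain.setdefault(current, []).append(nxt)
    return chain

def generate(chain, start, length, seed=0):
    """Emit text one token at a time, each chosen only among observed successors."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break  # dead end: the last word never had a successor in the corpus
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat the cat ate the fish"
chain = build_chain(corpus)
print(generate(chain, "the", 5))
```

The difference from an LLM is scale, architecture, and training, not the token-by-token sampling loop itself, which is why the next-token definition alone is too broad.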
*:::::::Even if you don’t consider it L, I do, and many sources cited by the article do. Since we’ll have such contesting during enforcement, it’s better to find a way that precludes such controversy. ] (]) 20:44, 13 December 2024 (UTC) | |||
*::::::::NMTs, LLMs, and the text-creation functionality of LLMs are fundamentally different in the context of this discussion, which is about content generated through generative AI. NMTs are built specifically for translation: they are trained on parallel corpora and their output is optimized to match the input as precisely as possible, ''not'' to create novel text. LLMs have different training, including way more massive corpora, and were designed specifically to create novel text. One of the applications of LLMs may be translation (though currently it's too computationally intensive to run them for standard consumer purposes), by virtue of their being very good at determining semantic meaning, but even if/when they do become mainstream translation tools what they'll be used for is still ''not'' generative when it comes to translation output. ] (]) 22:29, 13 December 2024 (UTC) | |||
*:::::::::How will you differentiate between the use of LLM for copyediting and the use of LLM for generation? ] (]) 23:30, 13 December 2024 (UTC) | |||
*::::::::::The proposal is for hatting obvious cases of LLM-generated comments. Someone who just uses an LLM to copyedit will still have written the content themselves and presumably their output would not have the obvious tells of generative AI. ] (]) 23:56, 13 December 2024 (UTC) | |||
*:{{outdent|9}} Not when I tried to use it. Quantitatively, GPTZero went from 15% human to 100% AI for me despite the copyedits only changing 14 words. ] (]) 00:33, 14 December 2024 (UTC) | |||
*::I think there is consensus that GPTZero is not usable, even for obvious cases. ] (]) 00:55, 14 December 2024 (UTC) | |||
*:::Yes, but being as far as 100% means people will also probably think the rewrite ChatGPT-generated. ] (]) 01:18, 14 December 2024 (UTC) | |||
*::::Does it really mean that? All you've demonstrated is that GPTZero has false positives, which is exactly why its use here was discouraged. ] (]) 05:26, 14 December 2024 (UTC) | |||
*:::::My subjective evaluation of what I got copyediting from ChatGPT was that it sounded like ChatGPT. I used GPTZero to get a number. ] (]) 14:18, 14 December 2024 (UTC) | |||
*::::::My guess is that the copyediting went beyond what most people would actually call "copyediting". ] (]) 18:04, 23 December 2024 (UTC) | |||
*:::::::It changed only 14 words across two paragraphs and still retained the same meaning in a way that I would describe it as copyediting. Such levels of change are what those lacking confidence in tone would probably seek anyways. ] (]) 00:15, 24 December 2024 (UTC) | |||
* On one hand, AI slop is a plague on humanity and obvious LLM output should definitely be disregarded when evaluating consensus. On the other hand, I feel like existing policy covers this just fine, and any experienced closer will lend greater weight to actual policy-based arguments, and discount anything that is just parroting jargon. <span class="nowrap">] <sub>]</sub> <sup>(] • ])</sup></span> 23:21, 12 December 2024 (UTC) | |||
*'''Support in principle''', but we cannot rely on any specific tools because none are accurate enough for our needs. Whenever I see a blatant ChatGPT-generated !vote, I ignore it. They're invariably poorly reasoned and based on surface-level concepts rather than anything specific to the issue being discussed. If someone is using AI to create their arguments for them, it means they have no actual argument besides ] and are looking for arguments that support their desired result rather than coming up with a result based on the merits. Also, toasters do not get to have an opinion. <span style="font-family:Papyrus, Courier New">]</span><sup><span style="font-family:Papyrus"><small>'']''</small></span></sup> 05:17, 13 December 2024 (UTC) | |||
*'''Oppose'''. For creating unnecessary drama. First off, the "detector" of the AI bot is not reliable, or at least the reliability of the tool itself is still questionable. If the tool to detect LLM text is itself unreliable, how can one reliably point out which one is LLM and which one is not? We have multiple tools that claim to be able to detect LLM text as well. Which one should we trust? Should we be elevating one tool over the others? Has there been any research showing that the "picked" tool is the most reliable? Second, not all LLMs are dangerous. We shouldn't treat LLM as a virus that will somehow take over the Internet or something. Some editors use LLM to smooth out their grammar and sentences and fix up errors, and there is nothing wrong with that. I understand that banning obvious LLM text per ] is good, but totally banning them is plain wrong. ] ] 22:56, 15 December 2024 (UTC) | |||
*:@], the proposal is to permit editors to collapse/strike ''obvious LLM text'', not to "ban LLM totally". If LLM use is imperceptible, like for tweaking grammar, it's not going to be affected. ] (]) 20:17, 19 December 2024 (UTC) | |||
*'''Support''' with some kind of caveat about not relying on faulty tools or presuming that something is LLM without evidence or admission, based on the following reasons: | |||
*# We have stricter rules around semi-automated editing (rollback, AutoWikiBrowser, etc.) and even stricter rules around fully automated bot editing. These cleanup edits are widely accepted as positive, but there is still the concern about an overwhelming amount of bad edits to wade through and/or fix. A form of that concern is relevant here. Someone could reply to every post in this discussion in just a minute or so without ever reading anything. That's inherently disruptive. | |||
*# Nobody who is voting "oppose" is using an LLM to cast that vote. The LLM comments have been left by those supporting to make a point about how problematic they are for discussions like this. I think this reflects, even among oppose voters, a developing community consensus that LLM comments will be disregarded. | |||
*# If the rule in practice is to disregard LLM comments, not writing that rule down does not stop it from being the rule, consensus, or a community norm. It just makes the rule less obvious and less clear. | |||
*# It's disrespectful for an editor to ask someone to spend their time reading a comment if they couldn't be bothered to spend any time writing it, and therefore a violation of the policy ], "{{tq|treat your fellow editors as respected colleagues with whom you are working on an important project.}}" | |||
* Also, I don't read the proposal as a ban on machine translation in any way. ] (]) 00:01, 18 December 2024 (UTC) | |||
*:@], above @] said their !vote was created by LLM. ] (]) 20:14, 19 December 2024 (UTC) | |||
* I am '''strongly opposed''' to banning or ignoring LLM-made talk page comments ''just'' because they are LLM-made. I'm not a big fan of LLMs at all; they are actually useful only for certain things, very few of which are directly relevant to contributing to Wikipedia in English or in any other language. However, some of those things ''are'' useful for this, at least for some humans, and I don't want to see these humans being kicked out of the English Wikipedia. I have already witnessed several cases in which people whose first language is not English tried writing talk page responses in the English Wikipedia, used an LLM to improve their writing style, and got their responses ignored ''only'' because they used an LLM. In all those cases, I had strong reasons to be certain that they were real humans, that they meant what they wrote, and that they did it all in good faith. Please don't say that anyone who wants to contribute to the English Wikipedia should, in the first place, know English well enough to write a coherent talk page comment without LLM assistance; occasionally, I kind of wish that it was like that myself, but then I recall that the world is more complicated and interesting than that. Uses of LLMs that help the English Wikipedia be more inclusive for good-faith people are good. Of course, defining what good faith means is complicated, but using an LLM is not, ''by itself'', a sign of bad faith. --] (]) 04:52, 19 December 2024 (UTC) | |||
*:Those concerned about their English should use translation software rather than an LLM. Both might alter the meaning to some extent, but only one will make things up. (It's also not a sure assumption that LLM text is coherent talkpage text.) ] (]) 07:44, 19 December 2024 (UTC) | |||
*::@] The dividing line between translation software and LLM is already blurry and will soon disappear. It's also rare that translation software results in coherent talkpage text, ''unless'' it's relying on some (primitive) form of LLM. So if we're going to outlaw LLMs, we would need to outlaw any form of translation software, and possibly any text-to-speech software as well. <span style="font-family:cursive">]]</span> 23:41, 19 December 2024 (UTC) | |||
*:::The distinctions have already been covered above, and no we would not have to. There is an obvious difference between software intended to translate and software intended to generate novel text, and users are likely to continue to treat those differently. ] (]) 02:49, 20 December 2024 (UTC) | |||
*'''Strong support'''. LLM-generated content has no place anywhere on the encyclopedia. ] (]) 10:27, 19 December 2024 (UTC) | |||
*'''Strong oppose''' to the proposal as written. Wikipedia already suffers from being ] in a ] ] and a refusal to move with the technological times. Anyone who remembers most Wikipedians' visceral reaction to ] and ] when they were first introduced will observe a striking similarity. Yes, those projects had serious problems, as do LLM-generated comments. But AI is the future, and this attitude of "]" will ultimately lead Wikipedia the way of ''Encyclopædia Britannica''. Our discussion needs to be how best to change, not how to avoid change. <span style="font-family:cursive">]]</span> 23:54, 19 December 2024 (UTC) | |||
*:{{smalldiv|1=The main objection to VE and a major objection to FLOW was the developers' insistence on transforming Wikitext to HTML for editing and then transforming that back to Wikitext. ] (]) 01:31, 20 December 2024 (UTC)}} | |||
*::True. Then, as now, there were many valid objections. But IIRC, there was limited discussion of "Let's figure out a better way to improve", and lots of "Everything is fine; don't change anything, ever." That attitude concerns me. <span style="font-family:cursive">]]</span> 01:52, 20 December 2024 (UTC) | |||
*'''Support'''. I'm not even slightly swayed by these "it'll be too hard to figure out" and "mistakes could be made" and "we can't be 100% certain" sorts of arguments. That's true of {{em|everything}} around here, and it's why we have an admins-must-earn-a-boatload-of-community-trust system, and a system of review/appeal of decisions they (or of course non-admin closers) make, and a consensus-based decision-making system more broadly. {{U|JoelleJay}} has it exactly right: {{tq|having a policy that permits closers to discount apparently-LLM-generated contributions will discourage good-faith editors from using LLMs irresponsibly and perhaps motivate bad-faith editors to edit the raw output to appear more human, which would at least involve some degree of effort and engagement with their "own" arguments.}} And as pointed out by some others, the "it'll hurt non-native-English speakers" nonsense is, well, nonsense; translation is a different and unrelated process (though LLMs can perform it to some extent), of remapping one's {{em|own}} material onto another language.<!-- --><p>I'm also not in any way convinced by the "people poor at writing and other cognitive tasks need the LLM to help them here" angle, because ] is required. This is work (albeit volunteer work), it is ] a game, a social-media playground, a get-my-ideas-out-there soapbox, or a place to learn how to interact e-socially or pick up remedial writing skills, nor a venue for practicing one's argument techniques.
It's an encyclopedia, being built by people who – to be productive contributors instead of a draining burden on the entire community – {{em|must}} have: solid reasoning habits, great judgement (especially in assessing reliability of claims and the sources making them), excellent writing skills of a highly particularized sort, a high level of fluency in this specific language (in multiple registers), and a human-judgment ability to understand our thick web of policies, guidelines, procedures, and often unwritten norms, and how they all interact, in a way that may vary greatly by context. None of these is optional. An LLM cannot do any of them adequately (not even write well; their material sticks out like a sore thumb, and after a while you can even tell which LLM produced the material by its habitual but distinctive crappy approach to simulating human thought and language).</p><!-- --><p>In short, if you {{em|need}} an LLM to give what you think is meaningful input into a decision-making process on Wikipedia (much less to generate mainspace content for the public), then you {{em|need}} to go find something else to do, something that fits your skills and abilities. Saying this so plainly will probably upset someone, but so it goes. I have a rep for "not suffering fools lightly" and "being annoying but correct"; I can live with that if it gets the right decisions made and the work advanced. <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — ] ] ] 😼 </span> 05:49, 22 December 2024 (UTC)</p> | |||
*:The problem with all that is that we ''already'' have a policy that allows the hatting or removal of comments that are actually problematic because of their content (which are the only ones that we should be removing) without regard for whether it was or was not written by LLM. Everything that actually should be removed can be removed already. ] (]) 11:39, 22 December 2024 (UTC) | |||
*:People with reading skills, judgement, and reasoning habits good enough to find problems in existing articles don't necessarily have great interpersonal writing/communication skills or the confidence to use them. Meanwhile, for all that LLMs are bad at, they are very good at diluting everything you say into something dry, dispassionate, and thus inoffensive. ] (]) 15:26, 22 December 2024 (UTC) | |||
*:ok, I agree with @], so therefore my vote is '''Support.''' ] (]) 12:41, 25 December 2024 (UTC) | |||
*'''Support'''. Sure I have questions about detection, but I don't think it means we shouldn't have a policy that explicitly states that it should not be used (and can be ignored/hatted if it is). Judging solely based on content (and no wp:bludgeoning, etc.) is unsustainable IMO. It would mean taking every wall of text seriously until it's clear that the ''content'' is unhelpful, and LLMs are very good at churning out plausible-sounding bullshit. It wastes everyone's time. If cognitive impairments or ESL issues make it hard to contribute, try voice-to-text, old-school translation software, or some other aid. LLMs aren't really ''you''.--] (]) 11:27, 23 December 2024 (UTC) | |||
* '''Comment'''. While I agree with the sentiment of the request, I am at a loss to see how we can identify LLM-generated comments in a consistent manner that can scale. Yes, it might be easier to identify egregious copy-pasted walls of text, but anything other than that might be hard to detect. Our options are: | |||
:# Robust tooling to detect LLM generated text, with acceptably low levels of false positives. Somewhat similar to what Earwig does for Copyvios. But, someone needs to build it and host it on WMTools or at a similar location. | |||
:# Self-certification by editors. Every edit/publish dialog box should have a checkbox for "Is this text LLM-generated?" with a yes/no option. | |||
:# Editors playing a vigilante role in reading the text and making a personal call on other editors' text. Obviously this is least preferred. | |||
: These are my starting views. ] (]) 00:37, 24 December 2024 (UTC) | |||
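The false-positive concern in option 1 can be made concrete with a back-of-the-envelope sketch. All the rates below are illustrative assumptions, not measurements of any real detector: when the overwhelming majority of comments are human-written, even a seemingly modest false-positive rate produces more wrongly flagged humans than correctly flagged LLM posts.

```python
def expected_accusations(n_comments, llm_fraction, tpr, fpr):
    """Expected true and false flags when a detector is run on every comment.

    tpr: fraction of LLM-written comments the detector catches.
    fpr: fraction of human-written comments it wrongly flags.
    """
    n_llm = n_comments * llm_fraction
    n_human = n_comments - n_llm
    true_hits = n_llm * tpr       # LLM comments correctly flagged
    false_hits = n_human * fpr    # human comments wrongly flagged
    return true_hits, false_hits

# Illustrative assumptions: 10,000 comments, 2% LLM-written,
# a detector catching 90% of LLM text with a 5% false-positive rate.
true_hits, false_hits = expected_accusations(10_000, 0.02, 0.90, 0.05)
print(true_hits, false_hits)  # roughly 180 true flags vs 490 false flags
precision = true_hits / (true_hits + false_hits)
print(round(precision, 2))
```

Under these assumed rates, nearly three out of four flagged comments would be human-written, which is why any robust tooling would need a far lower false-positive rate than current detectors appear to offer.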
::A couple of days ago, an editor ran a new article through GPTzero, and was told that it was 99.8% likely to be AI-generated. I ran the same revision of the same article through the same tool the next day, and it told me that it was 98.3% likely to be human-written. | |||
::Now we're left scratching our heads: Why the difference? Which answer is correct? ] (]) 04:36, 26 December 2024 (UTC) | |||
*'''Strong oppose''' as it's impossible to enforce. Also LLMs are a valid and useful ] tool. – ] 05:18, 24 December 2024 (UTC) | |||
*'''Bonus suggestion!''': I'm curious what Wikipedians think about this so let's try this. Many of the comments here discuss the impracticality of determining whether a user's comments are AI generated (i.e. gptzero isn't perfect), and many give valid arguments for using LLMs (i.e. ]). If an argument is suspected to be written by LLM, I propose that editors should examine the user. Take a look at their listed contributions, and if they seem to have a habit of using AI, open a discussion on their talk page. If the user has a habit of using AI and doesn't recognize the inherent problems and refuses to change, this can be brought to ] for potential blocks. If (and only if) the person is blocked for using AI, their comments can be ignored. Or just ask ChatGPT to summarize them for you lol ] (]) 06:12, 25 December 2024 (UTC) | |||
*:I think that the general idea here is good: As much as possible, treat each account individually, and handle LLM use as a behavioral problem when it's actually a practical problem (i.e., not when it seems to be accurate and seems to be appropriate). ] (]) 04:38, 26 December 2024 (UTC) | |||
* '''Strong support''' the removal of any obvious, low-effort AI-generated post. I recently came across a user posting such examples. When called out on it, they posted a comment saying, amongst other things, "''HAHAHAHHAHAHAHAHAHAHHAHAHAHAHAHA. Yes, some of it might be. Because I don't have time to argue with, in my humble opinion, stupid PHOQUING people.''" and "''YOUUUUUUUUUUUUUUUUUUUUUUUUUUUU are assuming bath faith in me.''" | |||
:They were later blocked as a sock evading a global lock. | |||
:Currently it is too easy for trolls to game ] and AI to waste people's time arguing with their bot-generated replies. Using AI to write your posts for you makes it difficult for others to assume good faith. I am ok with obvious exceptions like a non-native speaker using AI to help them articulate their point. ] (]) 21:29, 25 December 2024 (UTC) | |||
* '''Support''' for any and all reasons above. I'd be in favor of entirely banning AI-written text on the platform in articlespace as well. | |||
=== Alternate proposal === | |||
{{discussion top|result=Redundant proposal, confusingly worded, with no support, and not even any further discussion interest in 10 days. <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — ] ] ] 😼 </span> 05:23, 22 December 2024 (UTC)}} | |||
Whereas many editors, including me, have cited problems with accuracy in regard to existing tools such as ZeroGPT, I propose that '''remarks that are blatantly generated by an LLM or similar automated system should be discounted/removed/collapsed/hidden'''. ] <sup>] / ]</sup> 10:00, 10 December 2024 (UTC) | |||
:'''Oppose''' as completely unnecessary and far too prone to error per the above discussion. Any comment that is good (on topic, relevant, etc) should be considered by the closer regardless of whether it was made with LLM-input of any sort or not. Any comment that is bad (off-topic, irrelevant, etc) should be ignored by the closer regardless of whether it was made with LLM-input of any sort or not. Any comment that is both bad and disruptive (e.g. by being excessively long, completely irrelevant, bludgeoning, etc) should be removed and/or hatted as appropriate, regardless of whether it was made with LLM-input of any sort. The good thing is that ''this is already policy'' so we don't need to call out LLMs specifically, and indeed doing so is likely to be disruptive in cases where human-written comments are misidentified as being LLM-written (which ''will'' happen, regardless of whether tools are used). ] (]) 11:19, 10 December 2024 (UTC) | |||
::We give up fair use and Wiki is dead. Indeed, all reference works are dead. Giving up fair use means outsiders control what we are allowed to say about them. It is the oldest and most prized right regarding reference books, and the courts have (nearly) always upheld us. The advantages of forfeiture? Close to zero. ] 05:07, 24 April 2006 (UTC)
:I think this proposal is not really necessary. I support it, but that is because it is functionally identical to the one directly above it, which I also supported. This should probably be hatted. ] ] 18:32, 10 December 2024 (UTC) | |||
:What does '''blatantly generated''' mean? Do you mean only where the remark is signed with "I, Chatbot", or anything that ''appears'' to be LLM-style? I don't think there's much in between. <span style="font-family:cursive">]]</span> 19:21, 10 December 2024 (UTC)
:'''Procedural close''' per BugGhost. I'd hat this myself, but I don't think that'd be appropriate since it's only the two of us who have expressed that this proposal is basically an exact clone. ] (]) 03:00, 11 December 2024 (UTC) | |||
{{discussion bottom}}
== Should first language be included in the infobox for historical figures? ==
::: While I think we should retain fair use here, if I'm not mistaken, some of the other wikis don't allow fair use and are working just fine. For example (and someone correct me if I'm wrong) the Italian wiki doesn't allow fair use images. There isn't any need for hyperbole. ] 05:10, 24 April 2006 (UTC) | |||
Is there a guideline concerning this? "Infobox royalty" apparently has this parameter, but I haven't found a single article that actually uses it. Many articles don't mention the subject's spoken languages at all. '''In my view, somebody's first language (L1) is just a very basic and useful piece of information''', especially for historical figures. This would be helpful in cases where the ruling elites spoke a completely different language from the rest of the country (e.g., High Medieval England or early Qing dynasty China). These things are not always obvious to readers who are unfamiliar with the topic. Including it would be a nice and easy way to demonstrate historical language shifts that otherwise might be overlooked. Perhaps it could also bring visibility to historical linguistic diversity and language groups that have since disappeared. Where there are multiple first languages, they could all be listed. And in cases where a person's first language remains unclear, it could simply be left out. ] (]) 11:53, 8 December 2024 (UTC) | |||
::::Correct. Most foreign language wikis attempt to conform both to the laws of the US (where the servers live), and the laws of the most prominent country or countries speaking that language, so that wiki content can be easily reused there. Most countries outside the US don't have a ] provision; instead, many European countries rely on the more stringent notion of ]. Also, I think some wikis may, like Commons, allow neither fair use nor fair dealing, so that only truly copyright free content is allowed. ] 05:20, 24 April 2006 (UTC) | |||
:::::I think the bottom line of this discussion is that we should exercise tight control over the fair use claims (such as only allowing a fixed set of fair use templates, screenshots, Nazis, etc., with strict patrolling for misplaced templates) and attempt to minimize the fair use images used. Maybe it should be policy that if there is a free/copyleft image and a fair use image both illustrating the same subject, we will discard the unfree image even if it is of superior quality. We should aim at having articles that would still be good if they were stripped of nonfree images (for example for the purpose of a derivative product, future "purist" fork, WP 1.0 on CD, etc.) ] <small>]</small> 13:58, 26 April 2006 (UTC)
:I don't think I agree this is a good use of infobox space:{{bulleted list|incongruences between elite spoken languages and popular spoken languages can't be shown with a single parameter (the language spoken by the oppressed would have to be included as well)|for many people this would be unverifiable (already mentioned in OP) and / or contentious (people living during a language transition) |sometimes L2 skills will be more than adequate to communicate with subject population when called for|in cases where the subject's L1 matches their polity's (i.e. most cases), the parameter would feel like unnecessary clutter|prose description seems adequate}}However, this is just my opinion, and the venue of discussion should probably be ] or similar, rather than VPP. ] (]) 12:02, 9 December 2024 (UTC) | |||
I just think there are way too many supposedly 'fair use' images that are used for strictly illustrative (rather than critical, comparative, or instructional) purposes, ESPECIALLY on articles discussing aspects of popular culture or media. I would go so far as to say that a copyrighted album cover is unnecessary (and in fact detrimental for the legal ambiguities it introduces), unless of course there is something notable about the cover art which is discussed in the article. ] 19:28, 26 April 2006 (UTC)
:I think this ''might'' be sufficiently important pretty much exclusively for writers where the language they wrote in is not the "obvious" one for their nationality. ] (]) 12:43, 9 December 2024 (UTC) | |||
:Retaining the fair-use limits is non-negotiable, as removing that limitation has the potential to disrupt future derivative projects and is in contradiction with the ]. ] <small>] • ]</small> 00:09, 29 April 2006 (UTC)
::It ''might'' also be important for politicians (and similar figures?) in countries where language is a politically-important subject, e.g. Belgium. ] (]) 16:29, 9 December 2024 (UTC) | |||
:This seems like a bad idea. Let's take a case where language spoken by a royal was very relevant: ]. When he became King of Castile as a teenager, he only really spoke Flemish and didn't speak Castilian Spanish, and needless to say trusted the advisors he could actually talk with (i.e. Flemish / Dutch ones he brought with him). He also then immediately skipped out of Castile to go to proto-Germany to be elected Holy Roman Emperor. This ended up causing a rebellion (]) which was at least partially justified by Castilian nationalism, and partially by annoyed Castilian elites who wanted cushy government jobs. So language-of-royal was relevant. But... the Infobox is for ''the person as a whole''. Charles came back to Castile and spent a stretch of 10 years there and eventually learned rather good Castilian and largely assuaged the elite, at least. He was king of Spain for ''forty years''. So it would seem rather petty to harp on the fact his first language wasn't Castilian in the Infobox, when he certainly did speak it later and through most of his reign, even if not his first few years when he was still basically a kid. ] (]) 19:47, 9 December 2024 (UTC) | |||
:::See below on this. ] (]) 14:26, 11 December 2024 (UTC) | |||
::SnowFire's fascinating anecdote shows that this information is not appropriate for infoboxes but rather should be described in prose in the body of the article where the subtleties can be explained to the readers. ] (]) 19:56, 9 December 2024 (UTC) | |||
:::No, it shows that it's not appropriate for that infobox, and therefore that it is not suitable for all infoboxes where it is plausibly relevant. It shows nothing about whether it is or is not appropriate for other infoboxes: the plural of anecdote is not data. ] (]) 21:08, 9 December 2024 (UTC) | |||
::::But it kind of is here? I picked this example as maybe one of the most obviously ''relevant'' cases. Most royals failing to speak the right language don't have this trait linked with a literal war in reliable sources! But if inclusion of this piece of information in an Infobox is still problematic in ''this'' case, how could it possibly be relevant in the 99.9% cases of lesser importance? The Infobox isn't for every single true fact. ] (]) 21:53, 9 December 2024 (UTC) | |||
:::::It isn't suitable for this infobox not because of a lack of importance, but because stating a single first language would be misleading. There exists the very real possibility of cases where it is both important and simple. ] (]) 00:02, 10 December 2024 (UTC) | |||
::::::Could you (or anyone else in favor of the proposal) identify 5 biographies where this information is both useful to readers and clearly backed by reliable sources? <sub>signed, </sub>] <sup>]</sup> 15:06, 11 December 2024 (UTC) | |||
::Charles V claimed to have spoken Italian to women, French to men, Spanish to God, and German to his horse. ] ] 21:35, 9 December 2024 (UTC) | |||
::Sorry, this is just nonsense! Charles V was raised speaking French, which was the language of his aunt's court, although in the Dutch-speaking ]. All his personal letters use French. He only began to be taught Dutch when he was 14, & may never have been much good at it (or Spanish or German). Contrary to the famous anecdote, which is rather late and dubious ("Spanish to God....German to my horse") he seems to have been a rather poor linguist, which was indeed awkward at times. ] (]) 00:39, 10 December 2024 (UTC) | |||
:::(This is a bit off-topic, but "nonsense" is too harsh. I'm familiar that he spoke "French" too, yes, although my understanding was that he did speak "Flemish", i.e. the local Dutch-inflected speech, too? And neither 1500-era French nor Dutch were exactly standardized, so I left it as "Flemish" above for simplicity. If his Dutch was worse than I thought, sure, doesn't really affect the point made, though, which was that his Castilian was non-existent at first. As far as his later understanding of Spanish, his capacity was clearly ''enough'' - at the very least I've seen sources say he made it work and it was enough to stave off further discontent from the nobility. Take it up with the authors of the sources, not me.). ] (]) 16:23, 10 December 2024 (UTC) | |||
::::There's a difference between "simplicity" and just being wrong! You should try reading the sources, with which I have no issue. And his ministers were also either native Francophones, like ] and his father ] (both from ], now in eastern France), or could speak it well; the Burgundian elite had been Francophone for a long time. The backwash from all this remains a somewhat sensitive issue in Belgium, even now. And Charles V was not "King of Spain" (a title he avoided using) for 40 years at all; only after his mother died in 1555 (a year before him) did he become unarguably King of Castile. ] (]) 14:26, 11 December 2024 (UTC) | |||
:It may not be appropriate for many articles, but it surely is for some. For example, when I told someone the other day that England had had kings whose first language was German, she asked me how many. It would be good to have a quick way of looking up the 18th-century Georges to find out. ] (]) 21:20, 9 December 2024 (UTC)
::I think the problem is that people might make assumptions. I would check before saying that George I and George II spoke German as their first language and not French. Languages spoken is probably more useful than birth language, but the list might be incomplete. There is also competing information about George I, and he is an English King, so he has been better researched and documented compared to other historical figures. | |||
::I agree that this is important when language is the basis of community identity, such as in Belgium. ] (]) 10:38, 10 December 2024 (UTC)
*Ummmm… no. People I disagree with™️ use “infobox bloat” as a boogeyman in arguments about infoboxes. But this is infobox bloat. Even those celebrity/anime character things that tell you shoe size, pinky length and blood type wouldn’t include this. ] (]) 18:16, 11 December 2024 (UTC) | |||
:I don't think there needs to be any central policy on this. It could be relevant to include this information for someone, perhaps... maybe... However, infoboxes work best when they contain uncontroversial at-a-glance facts that don't need a bunch of nuance and context to understand. For the example of Charles V, maybe his first language is significant, but putting it in the infobox (where the accompanying story cannot fit) would be a confusing unexplained factoid. Like, maybe once upon a time there was a notable person whose life turned on the fact that they were left-handed. That could be a great bit of content for the main article, but putting handedness in the infobox would be odd. ] (]) 14:33, 12 December 2024 (UTC) | |||
Fair use is an absolute must. Kill it off and you kill off Wikipedia. We should be living within the law, not making up our own rules that are more restrictive than the law. That makes no sense whatsoever. ]]\<sup><font color="blue">]</font></sup> 14:33, 1 May 2006 (UTC)
::{{tl|Infobox baseball biography}} includes handedness, and nobody finds that odd content for an infobox. | |||
::{{tl|infobox royalty}} includes the option for up to five native languages, though the OP says it seems to be unused in practice. {{tl|Infobox writer}} has a <code>|language=</code> parameter, and it would be surprising if this were unused. ] (]) 19:36, 12 December 2024 (UTC) | |||
:::Baseball seems to be a good example of where handedness is routinely covered, and easily consumable at a glance without needing further explanation. The scenario where I don't think handedness (or first language) makes sense is when it is a uniquely interesting aspect of that individual's life, because almost by definition there's a story there which the infobox can't tell. ] (]) 10:23, 13 December 2024 (UTC) | |||
:I don't think L1 can be determined for most historical figures without a hefty dose of ]. If you look at ], you'll see that I, as a living human being with all the information about my own life, could not tell you what my own "L1" is. The historical figures for whom this would be relevant mostly spoke many more languages than I do, and without a time machine it would be nigh impossible to say which language they learned first. This isn't even clear for the Qing emperors – I am fairly certain that they all spoke (Mandarin) Chinese very well, and ] never says what language they spoke. ] even states that he never spoke Manchu. Adding this parameter would also inflame existing debates across the encyclopedia about ethnonationalism (e.g. ]) and infobox bloat. ] ]] 21:21, 12 December 2024 (UTC)
::As with every bit of information in every infobox, if it cannot be reliably sourced it does not go in, regardless of how important it is or isn't. There are plenty of examples of people whose first language is reported in reliable sources; I just did an internal search for <kbd>"first language was"</kbd> and on the first page of results found sourced mentions of first language at ], ], ], ], ], ] and ], and an unsourced but plausible mention at ]. The article strongly suggests that her first language is an important part of Cleopatra's biography such that putting it in the infobox would be justifiable. I am not familiar enough with any of the others to have an opinion on whether it merits an infobox mention there; I'm simply reporting that there are many articles where first language is reliably sourced and a mention is deemed DUE. ] (]) 22:08, 12 December 2024 (UTC)
::I have been wondering since this conversation opened how far back the concept of an L1 language, or perhaps the most colloquial first language, can be pushed. Our article doesn't have anything on the history of the concept. ] (]) 11:31, 13 December 2024 (UTC) | |||
:::I suspect the concept is pretty ancient, I certainly wouldn't be surprised to learn it arose around the same time as diplomacy between groups of people with different first languages. The note about it at ] certainly suggests it was already a well-established concept in her era (1st century BCE). ] (]) 13:23, 13 December 2024 (UTC) | |||
::::The concept of different social strata speaking different languages is old, but I'm not sure whether they viewed learning languages the same way we do. It's certainly possible, and perhaps it happened in some areas at some times, but I hesitate to assume it's the case for every historical person with an infobox. ] (]) 16:05, 13 December 2024 (UTC) | |||
:::::It's certainly not going to be appropriate for the infobox of every historical person, as is true for (nearly?) every parameter. The questions here are whether it is appropriate in any cases, and if so in enough cases to justify having it as a parameter (how many is enough? I'd say a few dozen at minimum, ideally more). I think the answer the first question is "yes". The second question hasn't been answered yet, and I don't think we have enough information here yet to answer it. ] (]) 21:54, 13 December 2024 (UTC) | |||
::::::The question is not whether it is appropriate in any cases; the question is whether it is worth the trouble. I guarantee that this would lead to many vicious debates, despite being in most cases an irrelevant and unverifiable factoid based on inappropriate ]. This is the same reason we have ]/NATIONALITY. ] ]] 07:29, 16 December 2024 (UTC)
:Nah. If this were "a very basic and useful piece of information" then we would already be deploying it site-wide, so it obviously is not. In the vast majority of cases, it would involve intolerable ] or even just guessing masquerading as facts. We do not know for certain that someone born in France had French as their first/native/home language. I have close relatives in the US, in a largely English-speaking part of the US, whose first language is Spanish. For historical figures it would get even more ridiculous, since even our conception of languages today as, e.g., "German" and "French" and "Spanish" and "Japanese" is a bit fictive and is certainly not historically accurate, because multiple languages were (and still are, actually) spoken in these places. We would have no way to ascertain which was used originally or most natively for the average historical figure. Beyond a certain comparatively recent point, most linguistics is reconstruction (i.e. educated guesswork); if there's not a substantial corpus of surviving written material we cannot be sure. That matters a lot for figures like Genghis Khan and King Bridei I of the Picts. Finally, it really is just trivia in the vast majority of cases. A biographical figure's first/primary/home/most-fluent/most-frequently-used language (and some of those might not be the same, since all of them can change over time other than "first") is something that could be included when certain from RS, but it's not lead- or infobox-worthy in most cases, unless it pertains directly to the subject's notability (e.g. as a writer) {{em|and}} also isn't already implicit from other details like nationality. <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — ] ] ] 😼 </span> 03:42, 23 December 2024 (UTC)
== Restrict new users from crosswiki uploading files to Commons ==
Kirill Lokshin and rjensen are absolutely right. Furthermore, if there is really a great need for a Wikipedia compatible with the laws of every country in the world, then maybe an automatic script can be used to generate a Wikipedia Censored Version specifically for those applications. Just strip out every image with a Fair Use tag. While you're at it, you can strip out troubling usages of terms like "Tiananmen" and "Jesus", and so on... Doesn't it make more sense to start with a Wikipedia that can be dumbed down on command than one which would have to become the object of a brand new US-Wiki-Fairuse project to spruce it back up again? Also, if people lie about whether content is "fair use", they could lie about whether it was "public domain". The industry has gotten everything it's asked for, including the power to send 48-hour DMCA take-down notices whenever they want - so isn't that level of blind obedience from Wikipedia sufficient? ] 05:18, 6 May 2006 (UTC)
: (arriving late to the conversation) Unfortunately, while I enjoy the images on Wikipedia, I sometimes think they're more trouble than they are worth. Note, as just one example, the edit war over the lead image at ] that occurred recently. A similar case involving musical images occurred at ]. Then you have people using Wiki-webspace as their own personal storage site for images. But on top of it all there is also the fact that there are so many restrictions on Fair Use that it is making it very difficult to find images of anything that fits the criteria. For example, I am presently involved in an edit war (not really - I won't go to 3RR with it) at an article about a minor model. The issue: the fact that the illustration for the article is an example of her work -- a magazine cover. To remove the magazine cover pretty much removes any need for there to be an image on the page. Yet the image tag for magazine covers has been revised at some point recently to say that magazine covers can only be used when discussing the magazine, not the subject matter of the cover (or its photographer, for that matter). And I've already informed the user wanting to delete the magazine cover that he'll need to delete a bunch more in <nowiki>]</nowiki>. (He subsequently replied that this is indeed his plan - to remove all magazine images from Wikipedia that aren't specifically used in articles about said magazines.) This upset me enough that I was about to put forward a policy suggestion to remove images in toto from Wikipedia. Images just slow things down anyway -- think of the bandwidth this place would save if we went to all-text. And everyone in the copyright police would be happy ... sorry for sounding sarcastic, but we editors put in a lot of time and effort to find images that make this place worth using as a resource, with no intention of making monetary gain off any of them, and all we get for our trouble is people saying "sorry, you can't use it." To heck with it. I'm tempted to take down every image I've uploaded to this place. ] 04:07, 8 May 2006 (UTC)
I created this Phabricator ticket (]) in July of this year, figuring that consensus to restrict non-confirmed users from crosswiki uploading files to Commons is implied. Well, consensus ] in ]. I created an ], which was then rejected, i.e. "archived", as policy-related and {{tq|requir[ing] alignment across various wikis to implement such a policy}}. Now I'm starting this thread, thinking that the consensus here would already or implicitly support such a restriction, but I can stand corrected about the outcome here. ] (]) 06:34, 9 December 2024 (UTC); corrected, 08:10, 9 December 2024 (UTC)
:] has been in its current version since 02:30, 31 January 2006. There is no need to go all-text under any conditions. We have many GFDL and PD images. ] 04:42, 8 May 2006 (UTC)
* '''Support'''. I am not sure why this relies on alignment across wikis, those on Commons are best placed to know what is making it to Commons. The change would have little to no impact on en.wiki. If there is an impact, it would presumably be less cleaning up of presumably fair use files migrated to Commons that need to be fixed here. That said, if there needs to be consensus, then obviously support. We shouldn't need months of bureaucracy for this. ] (]) 06:41, 9 December 2024 (UTC) | |||
:* '''Support''', I don't know that my input really counts as new consensus because I said this at the time, but the problem is much worse than what the study suggests, as we are still finding spam, copyvios, unusable selfies and other speedy-deletable uploads from the timespan audited. ] (]) 02:14, 10 December 2024 (UTC)
:''arriving very late'' - I believe it is a question of whether it is for documentation purposes, or if it is really undermining a commercial service. Pictures of publicly available objects, in low resolution, are not very worthy of protection. Someone can take a picture of the Eiffel Tower easily. Nike sneakers (sorry) are in virtually thousands of shops. Copyright restrictions are most likely there to sharply prohibit defacing usage (this is not scientific/documentary). <u>This is my opinion of fair usage</u> (not to take it literally). I am sourcing lots of things personally; I do not mind about things which are anyway provided for no charge, but might become unavailable at some point in time. However, I know this is not allowed officially, even prohibited. I do include a visible tag of news agencies, in a way their representatives would never do. In 70 years, these files become PD probably. Some things like Egyptology are not meant to be a commercial stream; I believe sourcing from this is ''fair use'' in a sense. PD allows defacing etc., fair use does not. Check latest gas prices (political remark). See fansites/BBS (they source a lot of copyrighted data; this is tolerated, unwritten rules apply) - ] 8 May 2006
*'''Support''' As this applies to images being posted to Commons, but by a method that sidesteps their wishes, I don't see why another wiki should stand in the way. -- <small>LCU</small> ''']''' <small>''«]» °]°''</small> 16:54, 10 December 2024 (UTC)
*'''Support'''. I do think that disabling the ability of new editors on the English Wikipedia to engage in crosswiki uploads to Commons would be a net positive; the Commons community has come to this conclusion several times, and the research confirms that cross-wiki uploads by new users cause more trouble than the good uploads are worth. — ] <sub>]</sub> 00:36, 11 December 2024 (UTC)
*'''Support''' Way too low signal-to-noise ratio; most of these images are copyvios or otherwise useless. -- ]]]] 01:12, 11 December 2024 (UTC) | |||
*'''Support''' like the above editors. Much spam, many copyvios, few good images.—] 15:47, 11 December 2024 (UTC) | |||
*I don't think this should be any sort of enwiki policy. If commonswiki wants to restrict something, that should be up to them. I can't possibly see how it would need to be specific to the English Wikipedia (i.e. but not about new users on dewiki, eswikt, etc). — ] <sup>]</sup> 16:19, 11 December 2024 (UTC)
*:As noted by George Ho above, ]. The question is whether or not we want the English Wikipedia to assist in implementing this (perhaps by changing a local setting or software configuration to require that their uploads be local), rather than merely relying upon a Commons edit filter (which can be a bit unfriendly to new users). — ] <sub>]</sub> 19:50, 11 December 2024 (UTC)
*::] interests me: "Interestingly, we found that most uploaders were either ''marketers'' (editing/uploading on behalf of another entity such as their employer), or they were ''self-promoters'' (creating pages about themselves, unaware of the "notability" requirement)." | |||
*::So I wonder whether, instead of stopping this, we want a bot to look at newbies who create articles/drafts, check whether they uploaded something, and then tag both the image(s) and the pages here with a note that says something like "There is a 90% chance that this has been posted by a marketer or self-promoter", with suitable links to pages such as ]. Or maybe even a ] process. | |||
*::On the question of what to do, it should be possible to hide the cross-wiki upload button. The real question is, do we replace it with a link to ]? The Commons POV has been that it's bad for people to upload images within the visual editor, but okay for the same person to upload the same image with the UploadWizard. I'm not sure the net result is actually any different, especially for these marketers/self-promoters (in terms of net quality/acceptability; from Commons' POV, it's better because (a lot? a little?) fewer of them will click through to upload anything at Commons). ] (]) 19:49, 12 December 2024 (UTC) | |||
*'''Support''' Nearly every single thing I've ever put up for deletion at Commons has been stuff uploaded to spam en.wp. It never stops. ] ] 19:55, 11 December 2024 (UTC) | |||
*:Is this still happening? According to @] this is already blocked. — ] <sup>]</sup> 20:52, 11 December 2024 (UTC) | |||
*::Yes, it's still happening. Such uploads still include some from EnWiki; the edit filter, as currently implemented, only filters out images with certain characteristics. — ] <sub>]</sub> 21:05, 11 December 2024 (UTC)
*:::It is for sure still happening, I've nominated a few in just the past week. ] ] 22:26, 11 December 2024 (UTC) | |||
*::It's still happening. A lot of them go to the uncategorized backlog which has well over 100,000 things in it so they get overlooked. ] (]) 19:18, 12 December 2024 (UTC) | |||
*:::If anyone wants to help with that, then click on ]. Figure out what the image is (Google Lens or TinEye searches can help; go to ] and ⌘F for TinEye to find the right item). If you can identify it, then add a relevant cat. I believe that ] is enabled by default for all logged-in editors, so searching for cats is usually pretty easy. If you can't find something obviously relevant, then skip it and try another. ] (]) 20:02, 12 December 2024 (UTC) | |||
*::::I got another one just now. This really can't happen fast enough. ] ] 23:51, 12 December 2024 (UTC)
* '''Support''' It's honestly kinda dumb that we have to have this whole other consensus process after the prior one just because people at Meta-wiki don't want to implement it. ]]<sup>]</sup> 20:35, 13 December 2024 (UTC) | |||
* '''Support''', since this has already been decided and WMF is just being recalcitrant. ], and Commons isn't one either, nor is Wiktionary, etc., and to the extent WMF wants to be one, that needs to be nipped in the bud. <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — ] ] ] 😼 </span> 03:44, 23 December 2024 (UTC)
==Question(s) stemming from undiscussed move==
*I believe that eliminating our use of fair use, either with a grandfather clause or without, would both help keep the community running well, by limiting our interaction with laws that the community has proven not to understand very well, and help us produce an encyclopedia with genuinely free content. --] 17:24, 12 May 2006 (UTC)
*:I agree with Kirill Lokshin. However, the two new CSDs for fair use images do constitute a tightening of the rules. If you take out fair use altogether, people will still keep uploading images, they'll just tag them as {{tl|NoRightsReserved}} or some other tag that'll keep the bots off them; there are so few people on new upload patrol that it's trivially easy to bypass most restrictions anyway. | |||
*:Just a couple of other comments: fair use is definitely being unfairly abused at the moment. The most egregiously bad category is ], followed closely by ]. Magazine covers can only be used under fair use in an illustration of or critical commentary on ''the magazine'', not the person on the cover. Web screenshots, similar, except replace "the magazine" with "that website". | |||
*:Overall, we don't need tighter fair use policies, we need to enforce the policies we have. ] (]) 08:50, 15 May 2006 (UTC) | |||
**It is NOT true that fair use of magazine covers requires commentary on the magazine. In history articles they are used to demonstrate the importance of the person (as TIME person of the week). That is fair use. ] 10:11, 15 May 2006 (UTC)
"AIM-174 air-to-air missile" was moved without discussion to "]." After a no-consensus RM close (which should have brought us back to the original title, sans the agreed-upon unneeded additional disambiguator, in my opinion), I requested the discussion be re-opened, per ]. (TO BE CLEAR: I should have, at this time, requested immediate reversion. However, I did not want to be impolite or pushy.) The original closer -- ] (who found for "no consensus") -- was concerned they had become "too involved" in the process and requested another closer. Said closer immediately found consensus for "AIM-174B." I pressed on to , where an additional "no consensus" (to overturn) finding was issued. As ] pointed out during the move review, ''{{tq|"'''I take issue with the participating mover's interpretation of policy''' 'Unfortunately for you, a no consensus decision will result in this article staying here' '''in the RM, and would instead endorse your idea that aligns with policy, that a no consensus would take us back to the original title, sans extra disambiguator.'''"}}''
I think we should keep fair use. A lot of movie posters, etc. can't be replaced by non-fair use images (I'm 99.9% sure on that). --] | ] | ] | ] 04:26, 21 May 2006 (UTC) | |||
The issues, as I see them, are as follows:
I strongly support use of fair use. Thanks to it, en-Misplaced Pages is much better illustrated than the more restrictive Wiki, and I have yet to see any serious problems stemming from it. We are worrying way too much about problems that have not happened yet. Once in every - what? thousand? - images we get a letter requesting that the image be removed. We do so, or negotiate a settlement that makes both parties happy. --] <sup><font color="green">]</font></sup> 20:25, 23 May 2006 (UTC)
'''WP:RMUM:''' The move from “AIM-174 air-to-air missile” to “AIM-174B” was conducted without discussion, and I maintain all post-move discussions have achieved "no consensus." | |||
:Compare images on these English Misplaced Pages articles ], ], and ] to those on the German Misplaced Pages (], ], and ]). You'll see what a big difference Fair Use makes. ] 20:30, 24 May 2006 (UTC) | |||
'''Burden of Proof:''' The onus should be on the mover of the undiscussed title to justify their change, not on others to defend the original title. I refrained from reverting prior to initiating the RM process out of politeness, which should not shift the burden of proof onto me. | |||
This is something that isn't widely understood on Misplaced Pages yet, nor is there a policy page yet addressing it, but without fair use, we couldn't substantively describe the story or characters from any work of fiction. Fair use is not just about pictures or verbatim quotes of text. ] 17:34, 29 May 2006 (UTC) | |||
'''Precedent:''' I am concerned about the precedent this sets: may undiscussed moves be brute-forced into acceptance even if "no consensus" or only a very slim consensus (]) is found?
:Quite correct. From my understanding, the German Misplaced Pages ''does'' permit the ''German equivalent'' of fair use, which happens to be much narrower. It bars fair use mainly, as far as I know, because it might be illegal for Germans to upload and view things that are illegal under German law but legal under US law. (The German copyright policy page says something to the effect of "Fair use is an American thing, so we don't use it", not "Fair use is unfree, so we don't use it".) Banning fair use entirely would be impossible if we want to have articles about copyrighted works. —] (] • ]) 02:38, 30 May 2006 (UTC) | |||
'''Argument in favor of "AIM-174":''' See ] for arguments in favor and against. However, I would like to make it clear that I was the only person making WP-based arguments. Those in favor of "174B" were seemingly disagreeing with my WP arguments, but not offering their own in support of including the "B." That said, my primary WP-based argument is likely WP:CONSISTENT; ALL U.S. air-to-air missiles use the base model as their article title. See: ], ], ], ], ], ], ], ], ], ], ], ], ], ]. The "B" in "174B" is unnecessary and violates consistency.
== Use of American v. British/Commonwealth English == | |||
'''Do my policy contentions hold any weight? Or am I mad? Do I have any path forward, here?''' | |||
The vast majority of native English speakers speak American, not British (or Commonwealth), English. For this reason, to the extent that spelling is to be standardized or when a dispute arises, it makes sense to use American English.
TO BE CLEAR, I am not alleging bad faith on behalf of anyone, and I am extremely grateful to all those who have been involved, particularly the RM closer that I mentioned, as well as the MRV closer, ]. I would like to make it clear that this isn't simply a case of a MRV 'not going my way.' Again, I am concerned w/ the precedent and with the onus having been shifted to me for ''months''. I also apologize for the delay in getting this here; I originally but ] kindly suggested I instead post here.] (]) 00:08, 12 December 2024 (UTC) | |||
I note American English is much more widely used on the Internet. A Google search for color gets 1,370 million hits, colour gets 231M. Standardize gets 26 million, standardise gets 2.3 million. Favorite gets 1.2B, favourite gets 275M. In general the American spelling is at least five times more used than the British spelling. It isn't even close! | |||
:Are you familiar with ]? Do you ]? ] (]) 23:31, 12 December 2024 (UTC) | |||
I bring this up because in the dispute over whether to use the American or British spelling of aluminum for the title of the entry, the British variant won out. I think this was a mistake.
::I am quite familiar with it. It seemingly supports my argument(s), so...? Is there a particular reason you're speaking in quasi-riddles? ] (]) 01:11, 13 December 2024 (UTC) | |||
:::If yours is the title favored by the policy, then none of this explanation makes any difference. You just demand that it be put back to the title favored by the policy, and editors will usually go along with it. (It sometimes requires spelling out the policy in detail, but ultimately, most people want to comply with the policy.) | |||
:::If yours is not the title favored by the policy, then the people on the other 'side' are going to stand on policy when you ask to move it, so you'd probably have to get the policy changed to 'win'. If you want to pursue that, you will need to understand why the rule is set this way, so that you have a chance of making a convincing argument. ] (]) 05:24, 13 December 2024 (UTC) | |||
::::I think several individuals involved in this process have agreed that the default title is the favored title, at least as far as ], as you say.<br />(The only reason I listed any further ‘litigation’ here is to show what was being discussed in-general for convenience’s sake, not necessarily to re-litigate) <br />However, at least two individuals involved have expressed to me that they felt their hands were tied by the RM/MRV process. Otherwise, as I mentioned (well, as ] mentioned) the train of thought seemed to be “well, I don’t want the title to be changed,” and this was seemingly enough to override policy. Or, at best, it was seemingly a “well, it would be easier to just leave it as-is” sort of decision. | |||
::::<br /> | |||
::::And again, I, 100%, should have been more forceful; the title should have been reverted per the initial “no consensus” RM closure, and I will certainly bear your advice in mind in the future. That said, I suppose what I am asking is: would it be inappropriate to ask the original RM closer to revert the article at this point, given how much time has passed?
::::<br /> ] (]) 06:29, 13 December 2024 (UTC) | |||
:::::Given what was written in ] six weeks ago, I think that none of this is relevant. "Consensus to keep current name" does not mean that you get to invoke rules about what happens when there is no consensus. I suggest that you give up for now, wait a long time (a year? There is no set time, but it needs to be a l-o-n-g time), and ''maybe'' start a new ] (e.g., in 2026). ] (]) 19:41, 13 December 2024 (UTC) | |||
::::::Thanks! ] (]) 05:09, 14 December 2024 (UTC) | |||
:Everything ModernDayTrilobite advised you of is correct. Vpab15 closed the RM and determined that consensus was reached. Nothing since then has overturned or otherwise superseded Vpab15's closure. Therefore that closure remains in force. You already challenged the validity of Vpab15's closure at move review, and you have no avenue for challenging it again. Your best bet is to wait a tactful amount of time (several months) before starting another RM. And in that RM, none of this procedural stuff will matter, and you will be free to focus just on making the clearest, simplest case for why AIM-174 is the best title. ] (]) 06:10, 13 December 2024 (UTC) | |||
::I suppose my issue is better summed up by my above discussion with ]: the MRV ''shouldn’t'' have been required. That burden should never have been on me. The title should have been reverted at the initial “no consensus” per ]. Otherwise, undiscussed moves — when challenged — may now be upheld by ''either'' consensus ''or'' no consensus? This is not what WP:TITLECHANGES says, obviously. That said, I take full responsibility for not being clearer with this argument, and instead focusing on arguing for a ‘different’ title, when I should have been arguing for the default title per TITLECHANGES. ] (]) 06:33, 13 December 2024 (UTC)
:::You've repeatedly pointed to the initial self-reverted closure as if it's somehow significant. It isn't. Asukite voluntarily decided to close the discussion, and voluntarily self-reverted their decision to close. It doesn't matter whether you asked for it or someone else asked or no one asked. They had the right to self-revert then, for any reason or no reason. The net result is the same as if Asukite had never closed it at all. Only Vpab15's closure, which was 100% on Vpab15's own authority and 0% on the supposed authority of the annulled earlier closure, is binding. ] (]) 09:22, 13 December 2024 (UTC) | |||
::::I don't disagree with your latter statement, but why would an initial finding of no-consensus not matter? It should have brought us back to the default title, not simply been reverted. Because that policy wasn't followed, I'm here now, is my point. Regardless, I understand; Thank you for your advice! Well, I appreciate your time and consideration! :-) ] (]) 05:08, 14 December 2024 (UTC) | |||
:(Involved at the MRV) Seeing as I've been tagged in this multiple times and quoted, I'll give my thoughts on this. I don't want to accuse ] of selectively quoting me but I do think that my quote above was, when taken into account with the following discussion, more about meta-conversation about the correct policy to implement in the event the MRV went the other way. I explicitly said in the immediately following message {{tpq|the view that the close was not outside the scope of ] is reasonable and good faith interpretation.}} I do think this close was within bounds, and the following MRV appropriately closed and summarised. | |||
:Yes, had stood, then it could have been reverted wholecloth. and therefore plays no role in the consideration of the subsequent closure. We're always going to take the most recent finding of consensus to be what holds. It seems to have been said in the above that had the no consensus closure held and the appropriate ] policy been applied, then the appellant here would have gotten their preferred outcome. But to continue to argue this in the face of the subsequent developments is where this enters ] territory. I think that since then, the appellant has continued to make policy arguments that would be better suited for a subsequent and focused RM on the actual title rather than wikilawyer about a previous close that was self-reverted and continuing to argue policy. | |||
:There's nothing for this venue to really change in regards to that AT, and the discussion to change the AT would need to be had at the article's talk page. My sincere advice to the appellant is to wait a reasonable amount of time and make strong policy-based arguments about the preferred title (don't just quote policy, we editors are good at clicking links and reading it for ourselves—quoting nothing but policy back at us makes us feel like you've taken us for fools; instead provide facts and sources that support the relevant policies and link those). Spend some time at ] and see what well-argued and successful RMs typically look like. ] (]) 17:38, 17 December 2024 (UTC)
== CSD A12. Substantially written using a large language model, with hallucinated information or fictitious references == | |||
In general, I think the best guide when it comes to a question of spelling is usage, and the American spelling of a word will virtually always be the most used. So I propose either a policy that explicitly defers to the dominant dialect of English (American English) or a policy based on usage.
{{Archive top|status=withdrawn|result=Per the solution to the problem already being covered by ], something I was unaware of when I made the proposal. ] (]) 15:49, 14 December 2024 (UTC)}} | |||
] 23:47, 2 May 2006 (UTC) | |||
When fixing up new articles, I have encountered articles that appear to have been substantially generated by AI, containing ] information. While these articles may not meet other criteria for speedy deletion, as the subjects themselves are sometimes real and notable, waiting for seven days to ] the articles is inefficient. I recommend designating ] for the speedy deletion of these articles. I have created a template (]) to be used if this proposal is successful. A recent example is the article on the ], where the author explicitly ] that it was created using a large language model and contains references to sources that don't exist. I initially G11'd it, as that seemed the most appropriate criterion, but it was declined, and the article was subsequently PRODed. ] (]) 21:13, 12 December 2024 (UTC)
:Peace and harmony are best maintained by deliberately avoiding a standard and going by a "leave it be" philosophy. Most articles apply entirely or mainly to one particular country, and that country's usage prevails. Elsewhere, whatever standard was used when the article was created should be stuck with. --] 00:02, 3 May 2006 (UTC)
:The general consensus on this is that either is acceptable so long as the style of writing remains consistent. However, aluminium is listed as the standardized form of the word with the American version as an acceptable variant. Using the first spelling is technically correct. – ] (] | ]) 00:05, 3 May 2006 (UTC) | |||
:CSD are generally limited to things that are unambiguously obvious. I imagine the number of cases in which it's unambiguously obvious that the entire page was generated by an LLM (as opposed to the editor just using the LLM to generate references, for example) is small enough that it doesn't warrant a speedy deletion criterion. <span class="nowrap">--] (])</span> 21:29, 12 December 2024 (UTC)
::Agreed. Also, I don't buy the argument about the "vast majority". Did you figure in ]? Numbers on the Internet are skewed towards richer countries, of course. --] 00:10, 3 May 2006 (UTC) | |||
::I like this idea but agree that it's better not as a CSD but perhaps its own policy page. ''']'''<span style="border:2px solid #073642;background:rgb(255,156,0);background:linear-gradient(90deg, rgba(255,156,0,1) 0%, rgba(147,0,255,1) 45%, rgba(4,123,134,1) 87%);">]</span> 21:33, 12 December 2024 (UTC) | |||
:::Uh… did you even read what he posted? If it's skewed to the "rich countries" then that should be enough. By the way, if all of the native English speakers in India used the internet, then it would STILL be skewed to American spellings. 67.6% (or so) of native English speakers come from America. So, your point is incorrect. ] 07:09, 9 May 2006 (UTC)
:::I don't think it even merits a policy page. The number of cases where the LLM use is objectively unambiguous, ''and'' the article content sufficiently problematic that deletion is the only appropriate course of action ''and'' it cannot be (speedily) deleted under existing policy is going to be vanishingly small. Even the OP's examples were handled by existing processes (PROD) sufficiently. ] (]) 22:11, 12 December 2024 (UTC) | |||
::::India does not use English as its official language, and its inhabitants are not considered native English speakers. If they were, however, you can take your sorry >300 million Americans and put them up against the billion Indians who'd be using BrE, throw in every other BrE-native country (the ones you've handily included to make up 1/3 of native English users in the world), and start recounting those percentages. Then we could start counting other countries in which English is a secondary language, and check out which form is preferred there. Suddenly, you'll see AmE is at a somewhat less imposing figure; 25% would probably still be a high estimate, but you get the point. --] 17:47, 9 May 2006 (UTC)
:@], when you say that ] is "inefficient", do you mean that you don't want to wait a week before the article gets deleted? ] (]) 23:32, 12 December 2024 (UTC) | |||
::::And if I may ask, what ''exactly'' did you mean by "If it's skewed to the "rich countries" then that should be enough."?--] 17:49, 9 May 2006 (UTC) | |||
::My view is that ] is inefficient for articles that clearly contain hallucinated LLM-generated content and fictitious references (which almost certainly will be deleted), as it leaves them in the mainspace for longer than necessary. ] (]) 00:03, 13 December 2024 (UTC)
:::Efficiency usually compares the amount of effort something takes, not the length of time it takes. "Paint it and leave it alone for 10 minutes to dry" is the same amount of hands-on work as "Paint it and leave it alone for 10 days to dry", so they're equally efficient processes. It sounds like you want a process that isn't less hands-on work/more efficient, but instead a process that is faster. | |||
:::Also, if the subject qualifies for an article, then deletion isn't necessarily the right solution. Blanking bad content and bad sources is officially preferred (though more work) so that there is only verifiable content with one or more real sources left on the page – even if that content is only a single sentence. | |||
:::Efficiency and speed is something that many editors like. However, there has to be a balance. We're ] to build an encyclopedia, which sometimes means that rapidly removing imperfect content is only the second or third most important thing we do. ] (]) 00:43, 13 December 2024 (UTC) | |||
* This part {{tq|as the subjects themselves are sometimes real and notable}} is literally an inherent argument against using CSD (or PROD for that matter). ] the article to a sentence if necessary, but admitting that you're trying to delete an article you know is notable just means you're admitting to vandalism. ]]<sup>]</sup> 00:07, 13 December 2024 (UTC) | |||
: The general consensus is that articles about people/subjects that would be English use the English spelling, and people/subjects that would be American take the American spelling. Other articles stay with what they start as. We explicitly don't favor one over the other as this is an international project and it would be difficult and ethnocentric, to say the least, to enforce American spellings. Also, I don't think that a Google search can validate the greater usage of American English: certainly in Europe, Africa, and Asia, British English is more common. ] ] 02:09, 3 May 2006 (UTC) | |||
*:The categorization of my proposal as {{tq|admitting to vandalism}} is incorrect. ], the speedy deletion criterion I initially used for the article, specifies deleting articles that {{tq|would need to be ''fundamentally'' rewritten to serve as encyclopedia articles}}. Articles that have been generated using large language models, with hallucinated information or fictitious references, would need to be fundamentally rewritten to serve as encyclopedia articles. ] (]) 00:42, 13 December 2024 (UTC) | |||
*::Yes, but G11 is looking for blatant advertising ("Buy widgets now at www.widgets.com! Blue-green widgets in stock today!") It's not looking for anything and everything that needs to be fundamentally re-written. ] (]) 00:45, 13 December 2024 (UTC) | |||
*::(Edit Conflict) How does G11 even apply here? Being written via LLM does not make an article "promotional". Furthermore, even that CSD criteria states {{tq|If a subject is notable and the content could plausibly be replaced with text written from a neutral point of view, this is preferable to deletion.}} I.e. TNT it to a single sentence and problem solved. ]]<sup>]</sup> 00:46, 13 December 2024 (UTC) | |||
*The venue for proposing new criteria is at ]. So please make sure that you don't just edit in a new criterion without an RFC approving it, else it will be quickly reverted. ] (]) 00:20, 13 December 2024 (UTC) | |||
*:Since we are talking about BLPs… the harm of hallucinated information does need to be taken very seriously. I would say the first step is to stubbify. | |||
*:However, Deletion can be held off as a potential ''second'' step, pending a proper BEFORE check. ] (]) 01:06, 13 December 2024 (UTC) | |||
*::If the hallucination is sufficiently dramatic ("'''Joe Film''' is a superhero action figure", when it ought to say that he's an actor who once had a part in a superhero movie), then you might be able to make a good case for {{tl|db-hoax}}. ] (]) 05:26, 13 December 2024 (UTC) | |||
*:::I have deleted an AI generated article with fake content and references as a hoax. So that may well be possible. ] (]) 12:23, 13 December 2024 (UTC) | |||
:Isn't this covered by ]? ] (]) 20:34, 13 December 2024 (UTC) | |||
Indians do not speak English as their native language, and only a small minority of Indians speak English at all (and they are generally rich enough to afford Internet access). Yet American spellings still dominate the Internet by better than a five-to-one ratio. Also, in many countries, such as Mexico and Vietnam, those who learn ESL learn American English.
{{Archive bottom}} | |||
: English is "an official" language of India, one of several. It is very important in government and commerce. English is an official language in many former British Empire countries, which at one time accounted for 25% of the world's population, for what it is worth.--] 05:07, 12 May 2006 (UTC) | |||
== AFD clarification == | |||
Regarding aluminum, who exactly decided that the British spelling is the "standard?" IUPAC might be the best group to decide how to spell new elements, but I don't see how they can decide the standard form of an old word like aluminum. Once again, in searching the Internet we find the American spelling is far more common than the British. There is no standard spelling, but there is the more used one, and that is the American spelling. ] 00:45, 3 May 2006 (UTC) | |||
:Read the page. Aluminium was the original spelling. The use of aluminium is technically correct in this instance. I happen to think that the spelling and general pronunciation are weird, but this is not my decision. Aluminium is the correct spelling to use. – ] (] | ]) 01:07, 3 May 2006 (UTC)
::Historical usages are irrelevant. The original spelling of ''color'' was ''color'', ''honor'' ''honor'', and so forth for that kind of word (o/ou); the original spelling of ''-ize'' was ''-ize'' (or actually -ιζειν ''-izein''); and for that matter, the original spelling of ''wheat'' was ''hwæte''. '''Currently''', some people spell the name of the thirteenth element ''aluminum'', some ''aluminium''. And anyway, the actual original name was ''alum'''i'''um'', followed by ''alum'''in'''um'', and only after that ''alum'''ini'''um'', according to our article, so your facts are off as well.<p>At any rate, in response to the proposal: pragmatically, we're going to piss a lot of contributors off by snubbing their spelling habits, and the fact that there are more Americans than Brits just means that more American spellings will be put into articles to start with, and due to the policy of not changing them we'll have your one-to-five ratio or whatever. So I don't view this as much of a problem. —] (] • ]) 05:11, 3 May 2006 (UTC) | |||
:::Simetrical, you are wrong when you say that the original spelling of the word "color" was without the "u". In fact, it was the (American) Noah Webster who started spelling the word without the "u", in a bid to simplify the language's orthography. As for the debate, as someone who was born using AmEng, but spent a long time in countries that use BrEng, I am used to both, though I have standardized on AmEng. I say that as long as you are consistent in usage throughout the article, either British English ("the Queen's English") or American English is fine. Let's not make this an opportunity for casting aspersions at the colonials or the colonizers (or "colonisers", if you prefer), or of asserting some mistaken form of superiority (regarding whether or not Indians are native English speakers, for instance). -- ] 15:42, 15 May 2006 (UTC) | |||
::::In actual fact, the word was frequently spelt "color" in Britain long before Webster. There was no standard spelling at this time and words were spelt in many different ways. As an archivist, I have seen many British documents dating from the 18th and 19th centuries spelling this and similar words without the u. -- ] 16:34, 15 May 2006 (UTC) | |||
::::The word was originally spelled <COLOR> ''in Latin''. It then shifted to ''colour'' in early French, with the long ''O'' becoming spelled <ou>. English adopted the French spelling, having received the word from the Normans. Then, a few centuries back, some people on both sides of the pond started spelling it without the ''u'' again (to the extent that there was standard spelling then at all), hearkening back to the old Latin form. By what amounts to dumb luck, one spelling became standard in America, another in Britain (as well as a number of its other colonies/former colonies and, to a large extent, the rest of the world). Essentially the same happened with ''-ise''/''-ize''.<p>This is all from my recollection of the ''OED'' entry, by the way. I could look it up again if you really care. —] (] • ]) 02:06, 16 May 2006 (UTC) | |||
The ] article states that: | |||
This is a perpetual battle, but Misplaced Pages uses '''aluminium''' because this is the ] spelling, accepted for reference works in the English language. We use the spelling '''sulfur''' for the same reason: one apiece to each side of the Atlantic, although editors should '''please''' create the necessary redirects... ''"redirects are cheap"'' ] ] 15:48, 3 May 2006 (UTC)
{{TQ|If a redirection is controversial, however, AfD may be an appropriate venue for discussing the change in addition to the article's talk page.}} | |||
Does this mean that an AFD can be started by someone with the intent of redirecting instead of deleting? ] (]) 04:06, 13 December 2024 (UTC) | |||
:''Aluminium'' is a more consistent spelling; there are many elements ending in -ium, from helium to plutonium. - ] 19:15, 3 May 2006 (UTC) | |||
:Yes. If there is a contested redirect, the article is restored and it is brought to AfD. ] (]/]) 04:34, 13 December 2024 (UTC) | |||
::I think the ideal process is: | |||
::* Have an ordinary discussion on the talk page about redirecting the page. | |||
::* If (and only if) that discussion fails to reach consensus, try again at AFD. | |||
::I dislike starting with AFD. It isn't usually necessary, and it sometimes has a feel of the nom trying to get rid of it through any means possible ("I'll suggest a ], but maybe I'll be lucky and they'll delete it completely"). ] (]) 05:31, 13 December 2024 (UTC) | |||
:::Would need some stats on the "it isn't usually necessary" claim; my intuition based on experience is that if a BLAR is contested it's either dropped or ends up at AfD. ] (]) 05:48, 13 December 2024 (UTC)
::::I agree with that. From what I have seen at least, if redirecting is contested, it then is usually discussed at AFD, but that's just me. ] (]) 08:42, 13 December 2024 (UTC) | |||
:::::It depends how active the respective talk pages are (redirected article and target), but certainly for ones that are quiet AfD is going to be the most common. ] (]) 09:33, 13 December 2024 (UTC) | |||
::::::It will also depend on whether you advertise the discussion, e.g., at an active WikiProject. ] (]) 19:44, 13 December 2024 (UTC) | |||
:::I usually just go straight to AfD. I've found that editors contesting redirects usually !vote keep and discussing on talk just prolongs the inevitable AfD. ] (]/]) 14:58, 13 December 2024 (UTC) | |||
::::Gotcha. ] (]) 15:29, 13 December 2024 (UTC) | |||
:::Looking at the above comments: What is it about the ] process that isn't working for you all? If you redirect an article and it gets reverted, why aren't you starting a PM? ] (]) 21:37, 16 December 2024 (UTC) | |||
::::For me, it's lack of participation, no tool to list something at PAM, and no relisting option so proposed merges just sit for a very long time before being closed. ] (]/]) 23:21, 16 December 2024 (UTC) | |||
:::::What voorts said. Multiple times now I've floated the idea of making PAM more like RM, one of these years I should really get around to doing something more than that. I won't have time before the new year though. ] (]) 23:45, 16 December 2024 (UTC) | |||
::::::I think PAM should be merged into AfD, since both generally involve discussions of notability. ] (]/]) 00:00, 17 December 2024 (UTC) | |||
:::::::Merging often involves questions of overlap and topical distinction rather than just notability, although this also ends up discussed at AfD. I do wonder if this would leave proposals to split out in the cold though, as much like merge discussions they just sit there. ] (]) 04:00, 17 December 2024 (UTC) | |||
:::::The most important tool is Twinkle > Tag > Merge. I personally prefer its "Merge to" option, but there's a plain "Merge" if you don't know exactly which page should be the target. | |||
:::::All merges get bot-listed in ]. ] is another place to advertise it, and I'd bet that Twinkle could post those automatically with relatively little work (an optional button, similar to notifying the creator of deletion plans). | |||
:::::I dislike "relisting"; things should just stay open as long as they need to, without adding decorative comments about the discussion not happening fast enough. In my experience, merge proposals stay open because everyone's agreed on the outcome but nobody wants to do the work. ] (]) 06:46, 17 December 2024 (UTC) | |||
:In this context isn't redirection a *type* of deletion (specifically delete while leaving a redirect)? ] (]) 07:05, 17 December 2024 (UTC) | |||
::I would think so. ] (]) 07:33, 17 December 2024 (UTC) | |||
:::It's only a deletion if an admin pushes the delete button. Blanking and redirecting – even blanking, redirecting, and full-protecting the redirect so nobody can un-redirect it – is not deletion. ] (]) 07:34, 18 December 2024 (UTC) | |||
::::That might be clear to you (and the other admins) but almost nobody in the general community understands that (to the point where I would say its just wrong, deletion is broader than that in practice). ] (]) 16:23, 18 December 2024 (UTC) | |||
:::::Well, it has always been clear to me, and I am not, and have never wished to be, an admin. But, then again, I am a bit strange in that I expect things to be as people say that they will be. ] (]) 18:34, 18 December 2024 (UTC) | |||
:::::Contested redirects going to AfD makes sense. Articles are redirected for the same reasons they're deleted and redirecting is probably the most common ATD. I've opened plenty of AfDs where my nom recommends a redirect instead of deletion, including when I've BLARed an article and had the BLAR reverted. ] (]/]) 18:38, 18 December 2024 (UTC) | |||
::::::If a redirect has already been discussed or attempted, and consensus can't be reached easily, then I've got no problem with AFD. What I don't want to see is no discussion, no bold redirects, nobody's even hinted about a merge, and now it's at AFD, when the problem could have been resolved through a less intense method. ] (]) 19:07, 18 December 2024 (UTC) | |||
== RfC: Voluntary RfA after resignation == | |||
::And? Using ''program'' all the time is more consistent than using ''program'' to refer to computer programs and ''programme'' to refer to other program(me)s; shall we purge ''programme'' from Misplaced Pages? For that matter, ''thru'' is more consistent with English spelling than ''through'', so shall we purge the latter? While I sympathize with those promoting greater consistency in language, even to the point of being a ] fan, Misplaced Pages is not a soapbox for linguistic ].<p>Physchim does bring up a good point, though. If we agree, as a policy, to follow IUPAC spellings, I'd be all behind that. —] (] • ]) 02:36, 5 May 2006 (UTC) | |||
<!-- ] 22:01, 19 January 2025 (UTC) -->{{User:ClueBot III/DoNotArchiveUntil|1737324070}}
{{rfc|policy|rfcid=5F11665}}
Should ] be amended to:
* '''Option 1'''{{snd}}Require former administrators to request restoration of their tools at the ] (BN) if they are eligible to do so (i.e., they do not fit into any of the exceptions).
* '''Option 2'''{{snd}}<s>Clarify</s> <ins>Maintain the status quo</ins> that former administrators who would be eligible to request restoration via BN may instead request restoration of their tools via a voluntary ] (RfA).
* '''Option 3'''{{snd}}Allow bureaucrats to SNOW-close RfAs as successful if (a) 48 hours have passed, (b) the editor has right of resysop, and (c) a SNOW close is warranted.
'''Background''': This issue arose in one ] and is currently being discussed in an ]. ] (]/]) 21:14, 15 December 2024 (UTC)<br />
'''Note''': There is an ongoing related discussion at {{slink|Misplaced Pages:Village pump (idea lab)#Making voluntary "reconfirmation" RFA's less controversial}}.<br />
'''Note''': Option 2 was modified around 22:08, 15 December 2024 (UTC).
'''Note''': Added option 3. ] (] • she/her) 22:12, 15 December 2024 (UTC)
:{{block indent|em=1.6|1=<small>Notified: ], ], ], ], ]. ] (]/]) 21:19, 15 December 2024 (UTC)</small>}}<!-- Template:Notified --> | |||
*'''2''' per ]. If an admin wishes to be held accountable for their actions at a re-RfA, they should be allowed to do so. ] ] 21:22, 15 December 2024 (UTC) | |||
*:Also fine with 3 ] ] 22:23, 15 December 2024 (UTC) | |||
* There is ongoing discussion about this at ]. ] (]) 21:24, 15 December 2024 (UTC) | |||
** '''2''', after thought. I don't think 3 provides much benefit, and creating separate class of RfAs that are speedy passed feels a misstep. If there are serious issues surrounding wasting time on RfAs set up under what might feel to someone like misleading pretenses, that is best solved by putting some indicator next to their RFA candidate name. Maybe "Hog Farm ('''RRfA''')". ] (]) 14:49, 16 December 2024 (UTC) | |||
**:{{tq|best solved by putting some indicator next to their RFA candidate name. Maybe "Hog Farm (RRfA)"}} - I like this idea, if option 2 comes out as consensus I think this small change would be a step in the right direction, as the "this isn't the best use of time" crowd (myself included) would be able to quickly identify the type of RFAs they don't want to participate in. ] ] 11:05, 17 December 2024 (UTC) | |||
**::I think that's a great idea. I would support adding some text encouraging people who are considering seeking reconfirmation to add (RRfA) or (reconfirmation) after their username in the RfA page title. That way people who are averse to reading or participating in reconfirmations can easily avoid them, and no one is confused about what is going on. ] (]) 14:23, 17 December 2024 (UTC) | |||
**::I think this would be a great idea if it differentiated against recall RfAs. ] (]) 18:37, 17 December 2024 (UTC) | |||
**:::If we are differentiating three types of RFA we need three terms. Post-recall RFAs are referred to as "reconfirmation RFAs", "Re-RFAS" or "RRFAs" in multiple places, so ones of the type being discussed here are the ones that should take the new term. "Voluntary reconfirmation RFA" (VRRFA or just VRFA) is the only thing that comes to mind but others will probably have better ideas. ] (]) 21:00, 17 December 2024 (UTC) | |||
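The suffix idea discussed in this sub-thread is straightforward to make concrete. A minimal sketch, assuming a hypothetical "(RRfA)" suffix convention in RfA page titles; the page names and the suffix itself are illustrative assumptions, not an adopted convention:

```python
# Sketch of the proposed "(RRfA)" title-suffix convention discussed above.
# The page titles and the suffix are hypothetical examples only.
open_rfas = [
    "Misplaced Pages:Requests for adminship/ExampleNewUser",
    "Misplaced Pages:Requests for adminship/ExampleFormerAdmin (RRfA)",
]

def is_reconfirmation(title: str) -> bool:
    """A reader or script could skip reconfirmation RfAs by suffix alone."""
    return title.rstrip().endswith("(RRfA)")

# Editors averse to reconfirmations could filter them out at a glance.
first_time_rfas = [t for t in open_rfas if not is_reconfirmation(t)]
```

The point of the sketch is that a fixed, documented suffix makes the distinction legible to humans and trivially parseable by scripts, without any change to the RfA process itself.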
* '''1''' ] ] 21:25, 15 December 2024 (UTC) | |||
*'''2''' I don't see why people trying to do the right thing should be discouraged from doing so. If others feel it is a waste of time, they are free to simply not participate. ] ] 21:27, 15 December 2024 (UTC) | |||
*'''2''' Getting reconfirmation from the community should be allowed. Those who see it as a waste of time can ignore those RfAs. ] ] 21:32, 15 December 2024 (UTC) | |||
*Of course they may request at RfA. They shouldn't, but they may. This RfC feels like it does nothing to address the criticism actually in play, and per the link to the idea lab discussion it's premature to boot. ] (]) 21:38, 15 December 2024 (UTC)
*'''2''' per my comments at the idea lab discussion and Queen of Hearts, Beeblebrox and Scazjmd above. I strongly disagree with Barkeep's comment that "They shouldn't". It shouldn't be made mandatory, but it should be encouraged where the time since desysop and/or the last RFA has been lengthy. ] (]) 21:42, 15 December 2024 (UTC)
*:When to encourage it would be a worthwhile RfC and such a discussion could be had at the idea lab before launching an RfC. Best, ] (]) 21:44, 15 December 2024 (UTC) | |||
*::I've started that discussion as a subsection to the linked VPI discussion. ] (]) 22:20, 15 December 2024 (UTC) | |||
*'''1''' <ins>or '''3'''</ins>. RFA is an "expensive" process in terms of community time. RFAs that qualify should be fast-tracked via the BN process. It is only recently that a trend has emerged that folks that don't need to RFA are RFAing again. 2 in the last 6 months. If this continues to scale up, it is going to take up a lot of community time, and create noise in the various RFA statistics and RFA notification systems (for example, watchlist notices and ]). –] <small>(])</small> 21:44, 15 December 2024 (UTC) | |||
*:Making statistics "noisy" is just a reason to improve the way the statistics are gathered. In this case collecting statistics for reconfirmation RFAs separately from other RFAs would seem to be both very simple and very effective. ''If'' (and it is a very big if) the number of reconfirmation RFAs means that notifications are getting overloaded, ''then'' we can discuss whether reconfirmation RFAs should be notified differently. As far as differentiating them, that is also trivially simple - just add a parameter to ] (perhaps "reconfirmation=y") that outputs something that bots and scripts can check for. ] (]) 22:11, 15 December 2024 (UTC) | |||
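The bot-checkable template parameter suggested above could work roughly as follows. A hedged sketch in Python: the template name, the "reconfirmation=y" parameter, and the sample wikitext are all illustrative assumptions, not an existing template interface.

```python
import re

# Hypothetical: if the RfA template accepted a "reconfirmation=y" parameter,
# statistics bots and notification scripts could separate reconfirmation
# RfAs from first-time RfAs by scanning the page wikitext.
RECONF_PATTERN = re.compile(
    r"\{\{\s*RFA\b[^}]*\|\s*reconfirmation\s*=\s*y(?:es)?\s*(?:\||\}\})",
    re.IGNORECASE,
)

def is_reconfirmation_rfa(wikitext: str) -> bool:
    """Return True if the RfA's wikitext flags it as a reconfirmation."""
    return bool(RECONF_PATTERN.search(wikitext))

# Example wikitext for a flagged reconfirmation RfA and a regular one.
reconf = "{{RFA|reconfirmation=y}}\n'''Candidate''' ..."
regular = "{{RFA}}\n'''Candidate''' ..."
```

A script gathering RfA statistics could then count the two kinds separately, which is the "very simple and very effective" separation the comment above describes.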
*:Option 3 looks like a good compromise. I'd support that too. –] <small>(])</small> 22:15, 15 December 2024 (UTC) | |||
*:I'm weakly opposed to option 3: editors who want feedback and a renewed mandate from the community should be entitled to it. If they felt that a quick endorsement was all that was required, they could have had that at BN; they explicitly chose not to go that route. Nobody is required to participate in an RFA, so if it is going the way you think it should, or you don't have an opinion, then just don't participate and your time has not been wasted. ] (]) 22:20, 15 December 2024 (UTC)
*'''2'''. We should not make it ''more difficult'' for administrators to be held accountable for their actions in the way they please. ]<sub>]<sub>]</sub></sub> (]/]) 22:00, 15 December 2024 (UTC) | |||
* Added '''option 3''' above. Maybe worth considering as a happy medium, where unsure admins can get a check on their conduct without taking up too much time. ] (] • she/her) 22:11, 15 December 2024 (UTC) | |||
*'''2''' – If a former admin wishes to subject themselves to RfA to be sure they have the requisite community confidence to regain the tools, why should we stop them? Any editor who feels the process is a waste of time is free to ignore any such RfAs. — ] ⚓ ] 22:12, 15 December 2024 (UTC) | |||
*:*I would also support option '''3''' if the time is extended to 72 hours instead of 48. That, however, is a detail that can be worked out after this RfC. — ] ⚓ ] 02:05, 16 December 2024 (UTC) | |||
*'''Option 3''' per leek. ] (]/]) 22:16, 15 December 2024 (UTC) | |||
*:A further note: option 3 gives 'crats the discretion to SNOW close a successful voluntary re-RfA; it doesn't require such a SNOW close, and I trust the 'crats to keep an RfA open if an admin has a good reason for doing so. ] (]/]) 23:24, 16 December 2024 (UTC) | |||
*'''2''' as per {{noping|JJPMaster}}. Regards, --] (]) 22:20, 15 December 2024 (UTC) | |||
*'''Option 2''' (no change) – The sample size is far too small for us to analyze the impact of such a change, but I believe RfA should always be available. Now that ] is policy, returning administrators may worry that they have become out of touch with community norms and may face a recall as soon as they get their tools back at BN. Having this familiar community touchpoint as an option makes a ton of sense, and would be far less disruptive / demoralizing than a potential recall. Taking this route away, even if it remains rarely used, would be detrimental to our desire for increased administrator accountability. – ] 22:22, 15 December 2024 (UTC) | |||
*{{ec}} I'm surprised the response here hasn't been more hostile, given that these give the newly-unresigned administrator a ] for a year. —] 22:25, 15 December 2024 (UTC) | |||
*:@] hostile to what? ] (]) 22:26, 15 December 2024 (UTC) | |||
*'''2, distant second preference 3'''. I would probably support 3 as first pick if not for recall's rule regarding last RfA, but as it stands, SNOW-closing a discussion that makes someone immune to recall for a year is a non-starter. Between 1 and 2, though, the only argument for 1 seems to be that it avoids a waste of time, for which there is the much simpler solution of not participating and instead doing something else. ] and ] are always there. <span style="font-family:courier"> -- ]</span><sup class="nowrap">[]]</sup> <small>(])</small> 23:31, 15 December 2024 (UTC) | |||
* 1 would be my preference, but I don't think we need a specific rule for this. -- ] (]) 23:36, 15 December 2024 (UTC) | |||
*'''Option 1'''. <s>No second preference between 2 or 3.</s> As long as a former administrator didn't resign under a cloud, picking up the tools again should be low friction and low effort for the entire community. If there are issues introduced by the recall process, they should be fixed in the recall policy itself. ] (]) 01:19, 16 December 2024 (UTC) | |||
*:After considering this further, I prefer option 3 over option 2 if option 1 is not the consensus. ] (]) 07:36, 16 December 2024 (UTC) | |||
*'''Option 2''', i.e. leave well enough alone. There is really not a problem here that needs fixing. If someone doesn’t want to “waste their time” participating in an RfA that’s not required by policy, they can always, well, not participate in the RfA. No one is required to participate in someone else’s RfA, and I struggle to see the point of participating but then complaining about “having to” participate. ] (]) 01:24, 16 December 2024 (UTC) | |||
*'''Option 2''' nobody is obligated to participate in a re-confirmation RfA. If you think they are a waste of time, avoid them. ] (]) 01:49, 16 December 2024 (UTC) | |||
* '''1 or 3''' per Novem Linguae. <span style="padding:2px 5px;border-radius:5px;font-family:Arial black;white-space:nowrap;vertical-align:-1px">] <span style=color:red>F</span> ]</span> 02:35, 16 December 2024 (UTC) | |||
*'''Option 3''': Because it is incredibly silly to have situations like we do now of "this guy did something wrong by doing an RfA that policy explicitly allows, oh well, nothing to do but sit on our hands and dissect the process across three venues and counting." Your time is your own. No one is forcibly stealing it from you. At the same time it is equally silly to let the process drag on, for reasons explained in ]. ] (]) 03:42, 16 December 2024 (UTC) | |||
*:Update: Option 2 seems to be the consensus and I also would be fine with that. ] (]) 18:10, 19 December 2024 (UTC) | |||
*'''Option 3''' per Gnoming. I think 2 works, but it is a very long process and for someone to renew their tools, it feels like an unnecessarily long process compared to a normal RfA. ] (]) 04:25, 16 December 2024 (UTC) | |||
*As someone who supported both WormTT and Hog Farm's RfAs, option 1 > option 3 >> option 2. At each individual RfA the question is whether or not a specific editor should be an admin, and in both cases I felt that the answer was clearly "yes". However, I agree that RfA is a very intensive process. It requires a lot of time from the community, as others have argued better than I can. I prefer option 1 to option 3 because the existence of the procedure in option 3 implies that it is a good thing to go through 48 hours of RfA to re-request the mop. But anything which saves community time is a good thing. <b>]]</b> (] • he/they) 04:31, 16 December 2024 (UTC) | |||
*:I've seen this assertion made multiple times now that {{tpq| requires a lot of time from the community}}, yet nowhere has anybody articulated how or why this is true. What time is required, given that nobody is required to participate and everybody who does choose to participate can spend as much or as little time assessing the candidate as they wish? How and why does a reconfirmation RFA require any more time from editors (individually or collectively) than a request at BN? ] (]) 04:58, 16 December 2024 (UTC)
*::I think there are a number of factors and people are summing it up as "time-wasting" or similar: | |||
*::# BN is designed for this exact scenario. It's also clearly a less contentious process.
*::# Snow closures are a good example of how we try to avoid wasting community time on unnecessary process, and the same reasoning applies here. Misplaced Pages is not a bureaucracy and there's no reason to have a 7-day process when the outcome is a given.
*::# If former administrators continue to choose re-RFAs over BN, it could set a problematic precedent where future re-adminship candidates feel pressured to go through an RFA and all that entails. I don't want to discourage people already vetted by the community from rejoining the ranks. | |||
*::# The RFA process is designed to be a thoughtful review of prospective administrators and I'm concerned these kinds of perfunctory RFAs will lead to people taking the process less seriously in the future. | |||
*::] (]) 07:31, 16 December 2024 (UTC) | |||
*::Because several thousand people have RFA on their watchlist, and thousands more will see the "there's an open RFA" notice on theirs whether they follow it or not. Unlike BN, RFA is a process that depends on community input from a large number of people. In order to even ''realise that the RFA is not worth their time'', they have to: | |||
*::* Read the opening statement and first few question answers (I just counted, HF's opening and first 5 answers are about 1000 words) | |||
*::* Think, "oh, they're an an ex-admin, I wonder why they're going through RFA, what was their cloud" | |||
*::* Read through the comments and votes to see if any issues have been brought up (another ~1000 words) | |||
*::* None have | |||
*::* Realise your input is not necessary and this could have been done at BN | |||
*::This process will be repeated by hundreds of editors over the course of a week. ] ] 08:07, 16 December 2024 (UTC) | |||
*:::That they were former admins has always been the first two sentences of their RfA’s statement, sentences which are immediately followed by that they resigned due to personal time commitment issues. You do not have to read the first 1000+ words to figure that out. If the reader wants to see if the candidate was lying in their statement, then they just have a quick skim through the oppose section. None of this should take more than 30 seconds in total. ] (]) 13:15, 16 December 2024 (UTC) | |||
*::::Not everyone can skim things easily - it personally takes me a while to read sections. I don't know if they're going to bury the lede and say something like "Also I made 10,000 insane redirects and then decided to take a break just before arbcom launched a case" in paragraph 6. Hog Farm's self nom had two paragraphs about disputes and it takes more than 30 seconds to unpick that and determine if that is a "cloud" or not. Even for reconfirmations, it definitely takes more than 30 seconds to determine a conclusion. ] ] 11:21, 17 December 2024 (UTC) | |||
*:::::They said they resigned due to personal time commitments. That is directly saying they weren't under a cloud, so I'll believe them unless someone claims the contrary in the oppose section. If the disputes section contained a cloud, the oppose section would have said so. One chooses to examine such nominations like normal RfAs. ] (]) 18:47, 17 December 2024 (UTC)
*::::::Just to double check, you're saying that whenever you go onto an RFA you expect any reason to oppose to already be listed by someone else, and no thought is required? I am beginning to see how you are able to assess an RFA in under 30 seconds ] ] 23:08, 17 December 2024 (UTC)
*:::::::Something in their statement would be an incredibly obvious reason. We are talking about the assessment of whether to examine the RfA and whether the candidate could've used BN. ] (]) 12:52, 18 December 2024 (UTC)
*::@] let's not confuse "a lot of community time is spent" with "waste of time". Some people have characterized the re-RFAs as a waste of time but that's not the assertion I (and I think a majority of the skeptics) have been making. All RfAs use a lot of community time as hundreds of voters evaluate the candidate. They then choose to support, oppose, be neutral, or not vote at all. While editor time is not perfectly fixed - editors may choose to spend less time on non-Misplaced Pages activities at certain times - neither is it a resource we have in abundance anymore relative to our project. And so I think we, as a community, need to be thoughtful about how we're using that time, especially when that time would otherwise have been spent on other wiki activities. Best, ] (]) 22:49, 16 December 2024 (UTC)
*:::Absolutely nothing compels anybody to spend any time evaluating an RFA. If you think your wiki time is better spent elsewhere than evaluating an RFA candidate, then spend it elsewhere. That way only those who do think it is a good use of their time will participate and everybody wins. You win by not spending your time on something that you don't think is worth it, those who do participate don't have ''their'' time wasted by having to read comments (that contradict explicit policy) about how the RFA is a waste of time. Personally I regard evaluating whether a long-time admin still has the approval of the community to be a very good use of community time, you are free to disagree, but please don't waste my time by forcing me to read comments about how you think I'm wasting my time. ] (]) 23:39, 16 December 2024 (UTC) | |||
*::::I am not saying you or anyone else is wasting time and am surprised you are so fervently insisting I am. Best, ] (]) 03:34, 17 December 2024 (UTC) | |||
*:::::I don't understand how your argument that it is not a good use of community time is any different from arguing that it is a waste of time? ] (]) 09:08, 17 December 2024 (UTC) | |||
*'''Option 2''' I don't mind the re-RFAs, but I'd appreciate if we encouraged restoration via BN instead, I just object to making it mandatory. ] <sup>(]) </sup> 06:23, 16 December 2024 (UTC) | |||
*'''Option 2'''. Banning voluntary re-RfAs would be a step in the wrong direction on admin accountability. Same with SNOW closing. There is no more "wasting of community time" if we let the RfA run for the full seven days, but allowing someone to dig up a scandal on the seventh day is an important part of the RfA process. The only valid criticism I've heard is that folks who do this are arrogant, but banning arrogance, while noble, seems highly impractical. ] </span>]] 07:24, 16 December 2024 (UTC) | |||
*Option 3, 1, then 2, per HouseBlaster. Also agree with Daniel Quinlan. I think these sorts of RFA's should only be done in exceptional circumstances. ] (]) 08:46, 16 December 2024 (UTC) | |||
* '''Option 1''' as first preference, option 3 second. RFAs use up a lot of time - hundreds of editors will read the RFA and it takes time to come to a conclusion. When that conclusion is "well that was pointless, my input wasn't needed", it is not a good system. I think transparency and accountability is a very good thing, and we need more of it for resysoppings, but that should come from improving the normal process (BN) rather than using a different one (RFA). My ideas for improving the BN route to make it more transparent and better at getting community input are outlined over on the ] ] 08:59, 16 December 2024 (UTC)
* '''Option 2''', though I'd be for '''option 3''' too. I'm all for administrators who feel like they want/should go through an RfA to solicit feedback even if they've been given the tools back already. I see multiple people talk about going through BN, but if I had to hazard a guess, it's way less watched than RfA is. However I do feel like watchlist notifications should say something to the effect of "A request for re-adminship feedback is open for discussion" so that people that don't like these could ignore them. <span>♠] ]</span>♠ 09:13, 16 December 2024 (UTC) | |||
*'''Option 2''' because ] is well-established policy. Read ], which says quite clearly, {{tpq|Regardless of the process by which the admin tools are removed, any editor is free to re-request the tools through the requests for adminship process.}} I went back 500 edits to 2017 and the wording was substantially the same back then. So, I simply do not understand why various editors are berating former administrators to the point of accusing them of wasting time and being arrogant for choosing to go through a process which is ''specifically permitted by policy''. It is bewildering to me. ] (]) 09:56, 16 December 2024 (UTC) | |||
*'''Option 2 & 3''' I think that there still should be the choice between BN and re-RFA for resysops, but I think that the re-RFA should stay like it is in Option 3, unless it is controversial, at which point it could be extended to the full RFA period. I feel like this would be the best compromise between not "wasting" community time (which I believe is a very overstated, yet understandable, point) and ensuring that the process is based on broad consensus and that our "representatives" are still supported. If I were WTT or Hog, I might choose to make the same decision so as to be respectful of the possibility of changing consensus. ] (]) | :) | he/him | 10:45, 16 December 2024 (UTC) | |||
*'''Option 2''', for lack of a better choice. Banning re-RFAs is not a great idea, and we should not SNOW close a discussion that would give someone immunity from a certain degree of accountability. I've dropped an idea for an option 4 in the discussion section below. ] (]) 12:08, 16 December 2024 (UTC) | |||
*'''Option 1''' I agree with Graham87 that these sorts of RFAs should only be done in exceptional circumstances, and BN is the best place to ask for tools back. – ] <small>(])</small> 12:11, 16 December 2024 (UTC) | |||
*'''Option 2''' I don't think prohibition makes sense. It also has weird side effects. eg: some admins' voluntary recall policies may now be completely void, because they would be unable to follow them even if they wanted to, because policy prohibits them from doing a RFA. (maybe if they're also 'under a cloud' it'd fit into exemptions, but if an admins' policy is "3 editors on this named list tell me I'm unfit, I resign" then this isn't really a cloud.) {{pb}} Personally, I think Hog Farm's RFA was unwise, as he's textbook uncontroversial. Worm's was a decent RFA; he's also textbook uncontroversial but it happened at a good time. But any editor participating in these discussions to give the "support" does so using their own time. Everyone who feels their time is wasted can choose to ignore the discussion, and instead it'll pass as 10-0-0 instead of 198-2-4. It just doesn't make sense to prohibit someone from seeking a community discussion, though. For almost anything, really. ] (]) 12:33, 16 December 2024 (UTC) | |||
*'''Option 2''' It takes like two seconds to support or ignore an RFA you think is "useless"... can't understand the hullabaloo around them. I stand by what I said on ] regarding RFAs being about evaluating trustworthiness and accountability. Trustworthy people don't skip the process. —] <span title="Canadian!" style="color:red">🍁</span> (] · ]) 15:24, 16 December 2024 (UTC) | |||
*'''Option 1''' - Option 2 is a waste of community time. - ] (]) 15:30, 16 December 2024 (UTC) | |||
*:Why? ] (]) 15:35, 16 December 2024 (UTC) | |||
*'''2''' is fine. '''Strong oppose''' to 1 and 3. Opposing option 1 because there is nothing wrong with asking for extra community feedback. Opposing option 3 because once an RfA has been started, it should follow the standard rules. Note that RfAs are extremely rare and non-contentious RfAs require very little community time (unlike this RfC, which seems a waste of community time, but there we are). —] (]) 16:59, 16 December 2024 (UTC)
*'''2''', with no opposition to 3. I see nothing wrong with a former administrator getting re-confirmed by the community, and community vetting seems like a good thing overall. If people think it's a waste of time, then just ignore the RfA. ] (]) 17:56, 16 December 2024 (UTC) | |||
*'''2''' Sure, and clarify that should such an RFA be unsuccessful they may only regain the tools through a future RfA. — ] <sup>]</sup> 18:03, 16 December 2024 (UTC)
*'''Option 2''' If contributing to such an RFA is a waste of your time, just don't participate. ] (]) 18:43, 16 December 2024 (UTC) | |||
*:No individual is wasting their time participating. Instead the person asking for a re-rfa is ''using'' tons of editor time by asking hundreds of people to vet them. Even the choice not to participate requires at least some time to figure out that this is not a new RfA; though at least in the two we've had recently it would require only as long as it takes to get to the RfA - for many a click from the watchlist and then another click into the rfa page - and to read the first couple of sentences of the self-nomination which isn't terribly long all things considered. Best, ] (]) 22:55, 16 December 2024 (UTC) | |||
*::I agree with you (I think) that it's a matter of perspective. For me, clicking the RFA link in my watchlist and reading the first paragraph of Hog Farm's nomination (where they explained that they were already a respected admin) took me about 10 seconds. Ten seconds is nothing; in my opinion, this is just a nonissue. But then again, I'm not an admin, checkuser, or an oversighter. Maybe the time to read such a nomination is really wasting their time. I don't know. ] (]) 23:15, 16 December 2024 (UTC) | |||
*:::I'm an admin and an oversighter (but not a checkuser). None of my time was wasted by either WTT or Hog Farm's nominations. ] (]) 23:30, 16 December 2024 (UTC) | |||
*'''2'''. Maintain the ''status quo''. And stop worrying about a trivial non-problem. --] (]) 22:57, 16 December 2024 (UTC) | |||
*'''2'''. This reminds me of banning plastic straws (bear with me). Sure, I suppose in theory, that this is a burden on the community's time (just as straws do end up in landfills/the ocean). However, the amount of community time that is drained is minuscule compared to the amount of community time drained in countless, countless other fora and processes (just like the volume of plastic waste contributed by plastic straws is less than 0.001% of the total plastic waste). When WP becomes an efficient, well oiled machine, then maybe we can talk about saving community time by banning re-RFA's. But this is much ado about nothing, and indeed this plan to save people from themselves, and not allow them to simply decide whether to participate or not, is arguably more damaging than some re-RFAs (just as banning straws convinced some people that "these save-the-planet people are so ridiculous that I'm not going to bother listening to them about anything."). And, in fact, on a separate note, I'd actually love it if more admins just ran a re-RFA whenever they wanted. They would certainly get better feedback than just posting "What do my talk page watchers think?" on their own talk page. Or waiting until they get yelled at on their talk page, AN/ANI, AARV, etc. We say we want admins to respect feedback; does it '''have''' to be in a recall petition? --] (]) 23:44, 16 December 2024 (UTC) | |||
*:What meaningful feedback has Hog Farm gotten? "A minority of people think you choose poorly in choosing this process to regain adminship". What are they supposed to do with that? I share your desire for editors to share meaningful feedback with administrators. My own attempt yielded some, though mainly offwiki where I was told I was both too cautious and too impetuous (and despite the seeming contradiction each was valuable in its own way). So yes let's find ways to get meaningful feedback to admins outside of recall or being dragged to ANI. Unfortunately re-RfA seems to be poorly suited to the task and so we can likely find a better way. Best, ] (]) 03:38, 17 December 2024 (UTC) | |||
*:Let us all take some comfort in the fact that no one has yet criticized this RfC comment as being a straw man argument. --] (]) 23:58, 18 December 2024 (UTC) | |||
*'''No hard rule, but we should socially discourage confirmation RfAs''' There is a difference between a hard rule, and a soft social rule. A hard rule against confirmation RfA's, like option 1, would not do a good job of accounting for edge cases and would thus be ultimately detrimental here. But a soft social rule against them would be beneficial. Unfortunately, that is not one of the options of this RfC. In short, a person should have a good reason to do a confirmation RfA. If you're going to stand up before the community and ask "do you trust me," that should be for a good reason. It shouldn't just be because you want the approval of your peers. (Let me be clear: I am not suggesting that is why either Worm or Hogfarm re-upped, I'm just trying to create a general purpose rule here.) That takes some introspection and humility to ask yourself: is it worth me inviting two or three hundred people to spend part of their lives to comment on me as a person?{{pb}}A lot of people have thrown around ] in their reasonings. Obviously, broad generalizations about it aren't convincing anyone. So let me just share my own experience. I saw the watchlist notice open that a new RfA was being run. I reacted with some excitement, because I always like seeing new admins. When I got to the page and saw Hogfarm's name, I immediately thought "isn't he already an admin?" I then assumed, ah, its just the classic RfA reaction at seeing a qualified candidate, so I'll probably support him since I already think he's an admin. But then as I started to do my due diligence and read, I saw that he really, truly, already had been an admin. At that point, my previous excitement turned to a certain unease. I had voted yes for Worm's confirmation RfA, but here was another...and I realized that my blind support for Worm might have been the start of an entirely new process. I then thought "bet there's an RfC going about this," and came here. 
I then spent a while polishing up my essay on editor time, before taking time to write this message. All in all, I probably spent a good hour doing this. Previously, I'd just been clicking the random article button and gnoming. So, the longwinded moral: yeah, this did eat up a lot of my editor time that could have and was being spent doing something else. And I'd do it again! It was important to do my research and to comment here. But in the future...maybe I won't react quite as excitedly to seeing that RfA notice. Maybe I'll feel a little pang of dread...wondering if its going to be a confirmation RfA. We can't pretend that confirmation RfA's are costless, and that we don't lose anything even if editors just ignore them. When run, it should be because they are necessary. ] <sup>]</sup>] 03:29, 17 December 2024 (UTC) | |||
*:And for what it's worth, support '''Option 3''' because I'm generally a fan of putting more tools in people's toolboxes. ] <sup>]</sup>] 03:36, 17 December 2024 (UTC) | |||
*:{{tpq|In short, a person should have a good reason to do a confirmation RfA. If you're going to stand up before the community and ask "do you trust me," that should be for a good reason. It shouldn't just be because you want the approval of your peers.}} Asking the community whether you still have their trust to be an administrator, which is what a reconfirmation RFA is, ''is'' a good reason. I expect getting a near-unanimous "yes" is good for one's ego, but that's just a (nice) side-effect of the far more important benefits to the entire community: a trusted administrator. | |||
*:The time you claim is being eaten up unnecessarily by reconfirmation RFAs was actually taken up by you choosing to spend your time writing an essay about using time for things you don't approve of and then hunting out an RFC in which you wrote another short essay about using time on things you don't approve of. Absolutely none of that is a necessary consequence of reconfirmation RFAs - indeed the response consistent with your stated goals would have been to read the first two sentences of Hog Farm's RFA and then closed the tab and returned to whatever else it was you were doing. ] (]) 09:16, 17 December 2024 (UTC) | |||
*:WTT's and Hog Farm's RFAs would have been completely uncontentious, something I hope for at RfA and certainly the opposite of what I "dread" at RfA, if it were not for the people who attack the very concept of standing for RfA again despite policy being crystal clear that it is absolutely fine. I don't see how any blame for this situation can be put on WTT or HF. We can't pretend that dismissing uncontentious reconfirmation RfAs is costless; discouraging them removes one of the few remaining potentially wholesome bits about the process. —] (]) 09:53, 17 December 2024 (UTC) | |||
*:@] Would you find it better if Watchlist notices and similar said "(re?)confirmation RFA" instead of "RFA"? Say for all voluntary RFAs from an existing admin or someone who could have used BN? | |||
*:As a different point, I would be quite against any social discouraging if we're not making a hard rule as such. Social discouraging is what got us the opposes at WTT/Hog Farm's RFAs, which I found quite distasteful and badgering. If people disagree with a process, they should change it. But if the process remains the same, I think it's important to not enable RFA's toxicity by encouraging others to namecall or re-argue the process in each RRFA. It's a short road from social discouragement to toxicity, unfortunately. ] (]) 18:41, 19 December 2024 (UTC) | |||
*::Yes I think the watchlist notice should specify what kind of RfA, especially with the introduction of recall. ] <sup>]</sup>] 16:49, 23 December 2024 (UTC) | |||
* '''Option 1'''. Will prevent the unnecessary drama trend we have been seeing recently. – ] (]) 07:18, 17 December 2024 (UTC) | |||
* '''Option 2''' if people think there's a waste of community time, don't spend your time voting or discussing. Or add "reconfirmation" or similar to the watchlist notice. ] (]) 15:08, 17 December 2024 (UTC) | |||
* '''Option 3''' (which I think is a subset of option 2, so I'm okay with the status quo, but I want to endorse giving 'crats the option to SNOW). While they do come under scrutiny from time to time for the extensive discussions in the "maybe" zone following RfAs, this should be taken as an indication that they are unlikely to do something like close it as SNOW in the event there are <em>real and substantial</em> concerns being raised. This is an okay tool to give the 'crats. As far as I can tell, no one has ever accused them of moving too quickly in this direction (not criticism; love you all, keep up the good work). ] (]) 17:26, 17 December 2024 (UTC) | |||
* '''Option 3 or Option 2'''. Further, if Option 2 passes, I expect it also ends all the bickering about lost community time. A consensus explicitly in favour of "This is allowed" should also be a consensus to discourage relitigation of this RFC. ] (]) 17:35, 17 December 2024 (UTC) | |||
*'''Option 2''': Admins who do not exude entitlement are to be praised. Those who criticize this humility should have a look in the mirror before accusing those who ask for reanointment from the community of "arrogance". I agree that it wouldn't be a bad idea to mention in parentheses that the RFA is a reconfirmation (watchlist) and wouldn't see any problem with crats snow-closing after, say, 96 hours. -- ] <sup>] · ]</sup> 18:48, 17 December 2024 (UTC) | |||
*:I disagree that BN shouldn't be the normal route. RfA is already as hard and soul-crushing as it is. ] (]) 20:45, 17 December 2024 (UTC) | |||
*::Who are you disagreeing with? This RfC is about voluntary RRfA. -- ] <sup>] · ]</sup> 20:59, 17 December 2024 (UTC) | |||
*:::I know. I see a sizable amount of commenters here starting to say that voluntary re-RfAs should be encouraged, and your first sentence can be easily read as implying that admins who use the BN route exude entitlement. I disagree with that (see my reply to Thryduulf below). ] (]) 12:56, 18 December 2024 (UTC) | |||
*::One way to improve the reputation of RFA is for there to be more RFAs that are not terrible, such as reconfirmations of admins who are doing/have done a good job who sail through with many positive comments. There is no proposal to make RFA mandatory in circumstances it currently isn't, only to reaffirm that those who voluntarily choose RFA are entitled to do so. ] (]) 21:06, 17 December 2024 (UTC) | |||
*:::I know it's not a proposal, but there's enough people talking about this so far that it could become a proposal.<br />There's nearly nothing in between that could've lost the trust of the community. I'm sure there are many who do not want to be pressured into ] without good reason. ] (]) 12:57, 18 December 2024 (UTC) | |||
*:::::Absolutely nobody is proposing, suggesting or hinting here that reconfirmation RFAs should become mandatory - other than comments from a few people who oppose the idea of people voluntarily choosing to do something policy explicitly allows them to choose to do. The best way to avoid people being pressured into being accused of arrogance for seeking reconfirmation of their status from the community is to sanction those people who accuse people of arrogance in such circumstances, as such comments are in flagrant breach of AGF and NPA. ] (]) 14:56, 18 December 2024 (UTC) | |||
*:::::Yes, I’m saying that they should not become preferred. There should be no social pressure to do RfA instead of BN, only pressure intrinsic to the candidate. ] (]) 15:37, 18 December 2024 (UTC) | |||
*::::::Whether they should become preferred in any situation forms no part of this proposal in any way shape or form - this seeks only to reaffirm that they are permitted. A separate suggestion, completely independent of this one, is to encourage (explicitly not mandate) them in some (but explicitly not all) situations. All discussions on this topic would benefit if people stopped misrepresenting the policies and proposals - especially when the falsehoods have been explicitly called out. ] (]) 15:49, 18 December 2024 (UTC) | |||
*:::::::I am talking about, and worrying over, that separate proposal many here are suggesting. I don’t intend to oppose Option 2, and sorry if I came off that way. ] (]) 16:29, 18 December 2024 (UTC) | |||
*'''Option 2'''. In fact, I'm inclined to ''encourage'' an RRfA over BN, because nothing requires editors to participate in an RRfA, but the resulting discussion is better for reaffirming community consensus for the former admin or otherwise providing helpful feedback. --] (]) 21:45, 17 December 2024 (UTC) | |||
*'''Option 2''' ] has said "{{tq|Former administrators may seek reinstatement of their privileges through RfA...}}" for over ten years and this is not a problem. I liked the opportunity to be consulted in the current RfA and don't consider this a waste of time. ]🐉(]) 22:14, 17 December 2024 (UTC) | |||
*'''Option 2'''. People who think it’s not a good use of their time always have the option to scroll past. ] (]) 01:41, 18 December 2024 (UTC) | |||
* '''2''' - If an administrator gives up sysop access because they plan to be inactive for a while and want to minimize the attack surface of Misplaced Pages, they should be able to ask for permissions back the quickest way possible. If an administrator resigns because they do not intend to do the job anymore, and later changes their mind, they should request a community discussion. The right course of action depends on the situation. ] <sup>]</sup> 14:00, 18 December 2024 (UTC) | |||
*'''Option 1'''. I've watched a lot of RFAs and re-RFAs over the years. There's a darn good reason why the community developed the "go to BN" option: saves time, is straightforward, and if there are issues that point to a re-RFA, they're quickly surfaced. People who refuse to take the community-developed process of going to BN first are basically telling the community that they need the community's full attention on their quest to re-admin. Yes, there are those who may be directed to re-RFA by the bureaucrats, in which case, they have followed the community's carefully crafted process, and their re-RFA should be evaluated from that perspective. ] (]) 02:34, 19 December 2024 (UTC) | |||
*'''Option 2'''. If people want to choose to go through an RFA, who are we to stop them? ] (]) 10:25, 19 December 2024 (UTC) | |||
*'''Option 2''' (status quo/no changes) per ]. This is bureaucratic rulemongering at its finest. Every time RFA reform comes up some editors want admins to be required to periodically reconfirm, then when some admins decide to reconfirm voluntarily, suddenly that's seen as a bad thing. The correct thing to do here is nothing. If you don't like voluntary reconfirmation RFAs, you are not required to participate in them. ] (<sup>]</sup>/<sub>]</sub>) 19:34, 19 December 2024 (UTC) | |||
*'''Option 2''' I would probably counsel just going to BN most of the time, however there are exceptions and edge cases. To this point these RfAs have been few in number, so the costs incurred are relatively minor. If the number becomes large then it might be worth revisiting, but I don't see that as likely. Some people will probably impose social costs on those who start them by opposing these RfAs, with the usual result, but that doesn't really change the overall analysis. Perhaps it would be better if our idiosyncratic internal logic didn't produce such outcomes, but that's a separate issue and frankly not really worth fighting over either. There are probably some meta issues here I'm unaware of, it's been a long time since I've had my finger on the community pulse so to speak, but they tend to matter far less than people think they do. ] (]) 02:28, 20 December 2024 (UTC) | |||
* '''Option 1''', per ], ], ], ], and related principles. We all have far better things to do that read through and argue in/about a totally unnecessary RfA invoked as a "Show me some love!" abuse of process and waste of community time and productivity. I could live with option 3, if option 1 doesn't fly (i.e. shut these silly things down as quickly as possible). But option 2 is just out of the question. <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — ] ] ] 😼 </span> 04:28, 22 December 2024 (UTC) | |||
*:Except none of the re-RFAs complained about have been {{tpq|RfA invoked as a "Show me some love!" abuse of process}}, you're arguing against a strawman. ] (]) 11:41, 22 December 2024 (UTC) | |||
*::It's entirely a matter of opinion and perception, or A) this RfC wouldn't exist, and B) various of your fellow admins like TonyBallioni would not have come to the same conclusion I have. Whether the underlying intent (which no one can determine, lacking as we do any magical mind-reading powers) is solely egotistical is ultimately irrelevant. The {{em|actual effect}} (what matters) of doing this whether for attention, or because you've somehow confused yourself into think it needs to be done, is precisely the same: a showy waste of community volunteers' time with no result other than a bunch of attention being drawn to a particular editor and their deeds, without any actual need for the community to engage in a lengthy formal process to re-examine them. <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — ] ] ] 😼 </span> 05:49, 23 December 2024 (UTC) | |||
*:::{{tqb|or because you've somehow confused yourself into think it needs to be done}} I and many others here agree and stand behind the very reasoning that has "confused" such candidates, at least for WTT. ] (]) 15:37, 23 December 2024 (UTC) | |||
*'''Option 2'''. I see no legitimate reason why we should be changing the status quo. Sure, some former admins might find it easier to go through BN, and it might save community time, and most former admins ''already'' choose the easier option. However, if a candidate last ran for adminship several years ago, or if issues were raised during their tenure as admin, then it may be helpful for them to ask for community feedback, anyway. There is no "wasted" community time in such a case. I really don't get the claims that this violates ], because it really doesn't apply when a former admin last ran for adminship 10 or 20 years ago or wants to know if they still have community trust.{{pb}}On the other hand, if an editor thinks a re-RFA is a waste of community time, they can simply choose not to participate in that RFA. Opposing individual candidates' re-RFAs based solely on opposition to re-RFAs in general ''is'' a violation of ]. – ] (]) 14:46, 22 December 2024 (UTC) | |||
*:But this isn't the status quo? We've never done a re-RfA before now. The question is whether this previously unconsidered process, which appeared as an ], is a feature or a bug. ] <sup>]</sup>] 23:01, 22 December 2024 (UTC) | |||
*::There have been lots of re-RFAs, historically. They were more common in the 2000s. ] in 2003 is the earliest I can find, back before the re-sysopping system had been worked out fully. ] back in 2007 was snow-closed after one day, because the nominator and applicant didn't know that they could have gone to the bureaucrats' noticeboard. For more modern examples, ] (2011) is relatively similar to the recent re-RFAs in the sense that the admin resigned uncontroversially but chose to re-RFA before getting the tools back. Immediately following and inspired by HJ Mitchell's, there was the slightly more controversial ]. That ended successful re-RFAs until 2019's ], which crat-chatted. Since then, there have been none that I remember. There have been several re-RFAs from admins who were de-sysopped or at serious risk of de-sysopping, and a few interesting edge cases such as the yet no-consensus ] in 2014 and the ] case in 2015, but those are very different than what we're talking about today. ] (]) 00:01, 23 December 2024 (UTC) | |||
*:::To add on to that, ] was technically a reconfirmation RFA, which in a sense can be treated as a re-RFA. My point is, there is some precedent for re-RFAs, but the current guidelines are ambiguous as to when re-RFAs are or aren't allowed. – ] (]) 16:34, 23 December 2024 (UTC) | |||
*::::Well thank you both, I've learned something new today. It turns out I was working on a false assumption. It has just been so long since a re-RfA that I assumed it was a truly new phenomenon, especially since there were two in short succession. I still can't say I'm thrilled by the process and think it should be used sparingly, but perhaps I was a bit over concerned. ] <sup>]</sup>] 16:47, 23 December 2024 (UTC) | |||
*'''Option 2 or 3''' per Gnoming and CaptainEek. Such RfAs only require at most 30 seconds for one to decide whether or not to spend their time on examination. Unlike other prohibited timesinks, it's not like something undesirable will happen if one does not sink their time. Voluntary reconfirmation RfAs are socially discouraged, so there is usually a very good reason for someone to go back there, such as accountability for past statements in the case of WTT or large disputes during adminship in the case of Hog Farm. I don't think we should outright deny these, and there is no disruption incurred if we don't. ] (]) 15:44, 23 December 2024 (UTC) | |||
*'''Option 2''' but for largely the reasons presented by CaptainEek. ''']''' (<small>aka</small> ] '''·''' ] '''·''' ]) 21:58, 23 December 2024 (UTC) | |||
*'''Option 2 (fine with better labeling)''' These don't seem harmful to me and, if I don't have time, I'll skip one and trust the judgment of my fellow editors. No objection to better labeling them though, as discussed above. ] (]) 22:36, 23 December 2024 (UTC) | |||
*'''Option 1''' because it's just a waste of time to go through and !vote on candidates who just want the mop restored when he or she or they could get it restored at BN with no problems. But I can also see option 2 being good for a former mod not in good standing. ] (]) 23:05, 23 December 2024 (UTC) | |||
*:If you think it is a waste of time to !vote on a candidate, just don't vote on that candidate and none of your time has been wasted. ] (]) 23:28, 23 December 2024 (UTC) | |||
*'''Option 2''' per QoH (or me? who knows...) ] • ] • ] 04:24, 27 December 2024 (UTC) | |||
===Discussion=== | |||
*{{re|Voorts}} If option 2 gets consensus how would this RfC change the wording {{tqq|Regardless of the process by which the admin tools are removed, any editor is free to re-request the tools through the requests for adminship process.}} Or is this an attempt to see if that option no longer has consensus? If so why wasn't alternative wording proposed? As I noted above this feels premature in multiple ways. Best, ] (]) 21:43, 15 December 2024 (UTC) | |||
*I've re-opened this per ] on my talk page. If other editors think this is premature, they can !vote accordingly and an uninvolved closer can determine if there's consensus for an early close in deference to the VPI discussion. ] (]/]) 21:53, 15 December 2024 (UTC) | |||
*:The discussion at VPI, which I have replied on, seems to me to be different enough from this discussion that both can run concurrently. That is, however, my opinion as a mere editor. — ] ⚓ ] 22:01, 15 December 2024 (UTC) | |||
*:@], can you please reword the RfC to make it clear that Option 2 is the current consensus version? It does not need to be clarified – it already says precisely what you propose. – ] 22:02, 15 December 2024 (UTC) | |||
*::{{done}} ] (]/]) 22:07, 15 December 2024 (UTC) | |||
*'''Question''': May someone clarify why many view such confirmation RfAs as a waste of community time? No editor is obligated to take up their time and participate. If there's nothing to discuss, then there's no friction or dis-cussing, and the RfA smooth-sails; if a problem is identified, then there was a good reason to go to RfA. I'm sure I'm missing something here. ] (]) 22:35, 15 December 2024 (UTC) | |||
*: The intent of RfA is to provide a comprehensive review of a candidate for adminship, to make sure that they meet the community's standards. Is that happening with vanity re-RfAs? Absolutely not, because these people don't need that level of vetting. I wouldn't consider a week-long, publicly advertised back-patting to be a productive use of volunteer time. -- ] (]) 23:33, 15 December 2024 (UTC) | |||
*::But no volunteer is obligated to pat such candidates on the back. ] (]) 00:33, 16 December 2024 (UTC) | |||
*::: Sure, but that logic could be used to justify any time sink. We're all volunteers and nobody is forced to do anything here, but that doesn't mean that we should promote (or stay silent with our criticism of, I suppose) things that we feel don't serve a useful purpose. I don't think this is a huge deal myself, but we've got two in a short period of time and I'd prefer to do a bit of push back now before they get more common. -- ] (]) 01:52, 16 December 2024 (UTC) | |||
*::::Unlike other prohibited timesinks, it's not like something undesirable will happen if one does not sink their time. ] (]) 02:31, 16 December 2024 (UTC) | |||
*:::::Except someone who has no need for advanced tools and is not going to use them in any useful fashion, would then skate through with nary a word said about their unsuitability, regardless of the foregone conclusion. The point of RFA is not to rubber-stamp. Unless there is some actual issue or genuine concern they might not get their tools back, they should just re-request them at BN and stop wasting people's time with pointless non-process wonkery. ] (]) 09:05, 16 December 2024 (UTC) | |||
*::::::I’m confused. Adminship requires continued use of the tools. If you think they're suitable for BN, I don’t see how doing an RfA suddenly makes them unsuitable. If you have concerns, raise them. ] (]) 13:02, 16 December 2024 (UTC) | |||
*I don't think the suggested problem (which I acknowledge not everyone thinks is a problem) is resolved by these options. Admins can still run a re-confirmation RfA after regaining administrative privileges, or even initiate a recall petition. I think as ], we want to encourage former admins who are unsure if they continue to be trusted by the community at a sufficient level to explore lower cost ways of determining this. ] (]) 00:32, 16 December 2024 (UTC) | |||
*:Regarding option 3, ]. The intent of having a reconfirmation request for administrative privileges is counteracted by closing it swiftly. It provides incentive for rapid voting that may not provide the desired considered feedback. ] (]) 17:44, 17 December 2024 (UTC) | |||
* In re the idea that RfAs use up a lot of community time: I first started editing Misplaced Pages in 2014. There were 62 RfAs that year, which was a historic low. Even counting all of the AElect candidates as separate RfAs, including those withdrawn before voting began, we're still up to only 53 in 2024 – counting only traditional RfAs it's only 18, which is the second lowest number ever. By my count we've had 8 resysop requests at BN in 2024; even if all of those went to RfA, I don't see how that would overwhelm the community. That would still leave us on 26 traditional RfAs per year, or (assuming all of them run the full week) one every other week. ] (]) 10:26, 16 December 2024 (UTC) | |||
* What about an option 4 encouraging eligible candidates to go through BN? At the end of the ], add something like "Eligible users are encouraged to use this method rather than running a new request for adminship." The current wording makes re-RfAing sound like a plausible alternative to a BN request, when in actual fact the former rarely happens and always generates criticism. ] (]) 12:08, 16 December 2024 (UTC) | |||
*:Discouraging RFAs is the second last thing we should be doing (after prohibiting them), rather per my comments here and in the VPI discussion we should be ''encouraging'' former administrators to demonstrate that they still have the approval of the community. ] (]) 12:16, 16 December 2024 (UTC) | |||
*:I think this is a good idea if people do decide to go with option 2, if only to stave off any further mixed messages that people are doing something wrong or rude or time-wasting or whatever by doing a second RfA, when it's explicitly mentioned as a valid thing for them to do. ] (]) 15:04, 16 December 2024 (UTC) | |||
*::If RFA is explicitly a valid thing for people to do (which it is, and is being reaffirmed by the growing consensus for option 2) then we don't need to (and shouldn't) discourage people from using that option. The mixed messages can be staved off by people simply not making comments that explicitly contradict policy. ] (]) 15:30, 16 December 2024 (UTC) | |||
*:::Also a solid option, the question is whether people will actually do it. ] (]) 22:55, 16 December 2024 (UTC) | |||
*::::The simplest way would be to just quickly hat/remove all such comments. Pretty soon people will stop making them. ] (]) 23:20, 16 December 2024 (UTC) | |||
* This is not new. We've had sporadic "vanity" RfAs since the early days of the process. I don't believe they're particularly harmful, and think that it unlikely that we will begin to see so many of them that they pose a problem. As such I don't think this policy proposal ]. ''']]''' 21:56, 16 December 2024 (UTC) | |||
* This apparent negative feeling evoked at an RFA for a former sysop ''everyone agrees is fully qualified and trusted'' certainly will put a bad taste in the mouths of other former admins who might consider a reconfirmation RFA ''without first'' visiting BN. This comes in the wake of Worm That Turned's similar rerun. ] (]) 23:29, 16 December 2024 (UTC) | |||
*:Nobody should ever be discouraged from seeking community consensus for significant changes. Adminship is a significant change. ] (]) 23:32, 16 December 2024 (UTC) | |||
*::No argument from me. I was a big Hog Farm backer way back when he was ''merely'' one of Misplaced Pages's best content contributors. ] (]) 12:10, 17 December 2024 (UTC) | |||
*All these mentions of editor time make me have to mention ] (TLDR: our understanding of how editor time works is dreadfully incomplete). ] <sup>]</sup>] 02:44, 17 December 2024 (UTC) | |||
*:I went looking for @]'s comment because I know they had hung up the tools and came back, and I was interested in their perspective. But they've given me a different epiphany. I suddenly realize why people are doing confirmation RfAs: it's because of RECALL, and the one year immunity a successful RfA gives you. Maybe everyone else already figured that one out and is thinking "well duh Eek," but I guess I hadn't :) I'm not exactly sure what to do with that epiphany, besides note the emergent behavior that policy change can create. We managed to generate an entirely new process without writing a single word about it, and that's honestly impressive :P ] <sup>]</sup>] 18:18, 17 December 2024 (UTC) | |||
*::Worm That Turned followed through on a pledge he made in January 2024, before the 2024 review of the request for adminship process began. I don't think a pattern can be extrapolated from a sample size of one (or even two). That being said, it's probably a good thing if admins occasionally take stock of whether or not they continue to hold the trust of the community. As I previously commented, it would be great if these admins would use a lower cost way of sampling the community's opinion. ] (]) 18:31, 17 December 2024 (UTC) | |||
*:::{{ping|CaptainEek}} You are correct that a year's "immunity" results from a successful RRFA, but I see no evidence that this has been the ''reason'' for the RRFAs. Regards, ] (]) 00:14, 22 December 2024 (UTC) | |||
*::::If people decide to go through a community vote to get a one year immunity from a process that only might lead to a community vote which would then have a lower threshold than the one they decide to go through, and also give a year's immunity, then good for them. ] (]) 01:05, 22 December 2024 (UTC) | |||
*::@] I'm mildly bothered by this comment, mildly because I assume it's lighthearted and non-serious. But just in case anyone does feel this way - I was very clear about my reasons for RRFA, I've written a lot about it, anyone is welcome to use my personal recall process without prejudice, and just to be super clear - I waive my "1 year immunity" - if someone wants to start a petition in the next year, do not use my RRfA as a reason not to. I'll update my userpage accordingly. I can't speak for Hog Farm, but his reasoning seems similar to mine, and immunity isn't it. ]<sup>TT</sup>(]) 10:28, 23 December 2024 (UTC) | |||
*:::@] my quickly written comment was perhaps not as clear as it could have been :) I'm sorry, I didn't mean to suggest that y'all had run for dubious reasons. As I said in my !vote, {{tq|Let me be clear: I am not suggesting that is why either Worm or Hogfarm re-upped, I'm just trying to create a general purpose rule here}}. I guess what I really meant was that the reason that we're having this somewhat spirited conversation seems to be the sense that re-RfA could provide a protection from recall. If not for recall and the one year immunity period, I doubt we'd have cared so much as to suddenly run two discussions about this. ] <sup>]</sup>] 16:59, 23 December 2024 (UTC) | |||
*::::I don't agree. No one else has raised a concern about someone seeking a one-year respite from a recall petition. Personally, I think essentially self-initiating the recall process doesn't really fit the profile of someone who wants to avoid the recall process. (I could invent some nefarious hypothetical situation, but since opening an arbitration case is still a possibility, I don't think it would work out as planned.) ] (]) 05:19, 24 December 2024 (UTC) | |||
*::I really don't think this is the reason behind WTT's and HF's reconfirmation RFAs. I don't think their RFAs had much utility and could have been avoided, but I don't believe for a second that their motivations were anything other than trying to provide transparency and accountability for the community. ] ] 12:04, 23 December 2024 (UTC) | |||
*I don't really care enough about reconf RFAs to think they should be restricted, but what about a lighter ORCP-like process (maybe even in the same place) where fewer editors can indicate, "yeah OK, there aren't really any concerns here, it would probably save a bit of time if you just asked at BN". ] (] • ]) 12:40, 19 December 2024 (UTC) | |||
== Audio-video guidance == | |||
Hi there, | |||
Per the post I made a few weeks ago regarding use of ], I think that ] might be expanded to make mention of audio-video content, as most of the same principles apply (eg aesthetics, quality, relevance, placement). There are some additional concerns: for example, if audio or video renders a primary source (eg is a recording of PD music such as Bach or similar, or is a reading of a PD text), then there might be some source validation requirements (ie, the music or text should match the original, within sensible boundaries; eg Mozart or Bach pieces may not be easily replicated with original instrumentation, or at least this should not be a requirement). | |||
So one option would be for a simple statement at ] that these guidelines normally apply to AV, or separate guidance for AV that explains that ] contains guidance that generally applies to AV. | |||
Is the correct process to raise an RFC? And is that done at ], or ], or here, or where? ] ] 19:38, 16 December 2024 (UTC) | |||
:I've posted a longer request for help explaining the gap at ]. It seems an RFC may not be needed but any advice would very much be appreciated. ] ] 20:28, 16 December 2024 (UTC) | |||
'''OMGSTFU'''<!-- This isn't directed at anyone in particular --> | |||
::I've ]. ] ] 22:50, 16 December 2024 (UTC) | |||
Language. Changes. There are ] ] variants of the English language and none is superior to or "more correct" than the others. Get over it. We've been through this a million times and people keep bringing it up again. The whole topic is just an incivility magnet and waste of time. ]. People who suggest various standardizations or forks or complex technical workarounds should just be directed there and the discussion halted. It can certainly be tweaked and cleaned up, but is there ''any'' chance that the fundamental idea behind this policy will ever be changed? I don't see any... — ] 16:42, 15 May 2006 (UTC) | |||
:Amen. —] (] • ]) 02:06, 16 May 2006 (UTC) | |||
:It would be difficult for me to agree more with Omegatron. Admittedly, I'm enough of a nitpicker that one some level it kind of bothers me to see "program" in one article and "programme" in another, but honestly, as long as it's spelled consistently one way within the same article, I really don't care. I don't think it's particularly important, especially when compared to the other, more important spelling problems Misplaced Pages faces on a daily basis. Generally, I think the current policy does the job pretty well. I'm sure it can be improved on, just like anything else, but in that case I'd like to see proposals for specific improvements instead of generic and draconian "everyone must spell everything in <one national strain of English>" pronouncements -- they only serve to annoy people and, in some of them, fires up feelings of national pride that make them more than willing to settle the difference between "color" and "colour" with sharp implements in some kind of a Thunderdomian "two spellings enter, one spelling leaves" battle to the death. Frankly, I think we can do without that crap, because it's tiresome and causes bad blood and, perhaps most importantly, doesn't actually improve Misplaced Pages at all. I mean, at least the endless arguing about whether notability should be a part of the criteria for inclusion has a huge potential impact on Misplaced Pages's content... -- ] 13:38, 20 May 2006 (UTC) | |||
::American spelling is on the internet, because the internet is an American invention, and is still Americentric. Therefore American-type spelling will get more hits. As for articles, contributors should use English spelling for English articles, and American spelling for American articles. If you see incorrect spelling, say harbour, in an American article, then change it. ] 22:03, 20 May 2006 (UTC)
:::The main place that the British spellings seem to turn up in America is when people are trying to be pompous and pretentious; various academics insist on using such spellings to try to sound more "cultured", and real estate developers name their developments things like "Harbour Centre" when they probably are nowhere near any harbo(u)r and are far from the cent(er/re) of anything. ] 13:04, 26 May 2006 (UTC)
Wallie's hit it on the head. Quite right.--] 22:09, 20 May 2006 (UTC)
:I think you meant ] and ] hit it. Having said that, I go with whoever first wrote the article. If it's in American English I'll write like that; if it's English² I'll write like that. - ] 22:36, 20 May 2006 (UTC)
As a side point, I believe aluminium is also the most widely used form in other languages (at the very least, Dutch, German and even FRENCH come to mind). - ] 15:56, 22 May 2006 (UTC)
Another side point: Punctuation is also different. In the States, commas and periods go inside the quotation marks -- "like this," or "like this." In GB, they go outside. Colons and semicolons go outside the quote marks in both systems. ] 04:21, 27 May 2006 (UTC)
== Policy proposal: Establishment of research groups to edit articles ==
{{Archive top
|status = withdrawn
|result = My policy proposal was too controversial in its original form and poorly thought out in its revision. ] (]) 23:48, 17 December 2024 (UTC)
}}
In order to have more reliable and unbiased articles and make Misplaced Pages fully reliable, I believe it necessary for articles and topics to be created and handled by '''bodies of editors called "Research groups"''', focusing on researching topics and articles to ensure Misplaced Pages articles are as accurate, neutral, and unbiased as possible.
<s>I also '''propose''' that editing articles will be reserved to their '''respective research groups''' and creating articles about a topic can only be done by said groups, with non-members being able to propose changes and articles via RFCs and whatnot. To join a research group, one must complete ''thorough training'' in the following areas:
*Bias reduction and neutrality training.
*Finding reliable sources and comparing sources.
*Professional research.
*Lessons about the topic.
*Misplaced Pages policy.
*Any other applicable areas.</s>
This policy would also reduce vandalism and guideline violations across Misplaced Pages, making situations easier to handle.
<s>Since Misplaced Pages is a widely-used source, ''it is time to move away from the current decentralized approach'' where just about anyone (no matter how inexperienced and biased) can edit, which has resulted in article bias, vandalism, unreliable sources, poor editing, sockpuppet accounts, edit wars, controversies, and a host of other problems.</s>
A Research-Group-based editing Misplaced Pages will be far more stable and solve many of our persistent problems. ] (]) 15:28, 17 December 2024 (UTC)
:So something more like ]? ] ] 16:11, 17 December 2024 (UTC) | |||
:So basically we would deprecate half our policies and guidelines? Honestly trying to be nice but this is a terrible idea ] (]) 16:14, 17 December 2024 (UTC) | |||
:Your proposal describes something that would not be Misplaced Pages at all (remember, we are the free encyclopaedia that anyone can edit), so is better suited for a ]. Usually proposals like yours (with editing only permitted to small numbers of "trusted" people) only work for small projects, if at all (I expect you would have to pay people to consider going through the "research group" training; it certainly does not sound attractive at all to me as a volunteer). You will need tens of thousands of editors to have any chance at not becoming outdated immediately. —] (]) 16:56, 17 December 2024 (UTC) | |||
:{{tq|it is time to move away from the current decentralized approach where just about anyone can edit}} - this proposal is dead on arrival - I support a SNOW close. ] ] 17:30, 17 December 2024 (UTC) | |||
::Congratulations! We're completely rejecting Misplaced Pages's open content, its editors' direct contributions (henceforth to be filtered through a new Misplaced Pages Politburo), with the basic principles of Misplaced Pages's founders soundly rejected. "Research group training" says it all. ] (]) 18:15, 17 December 2024 (UTC) | |||
:Respectfully, I think this is a horrible idea. The decentralized model is what the entire site was built upon. It is even one of the first things advertised on the Main Page! If we got rid of it, the site wouldn't truly be Misplaced Pages anymore. ] (]) 18:20, 17 December 2024 (UTC) | |||
::I mean, I think Misplaced Pages is big enough to drop the "free encyclopaedia" bit and attempt a Nupedia-like approach. However, I digress and have dropped that clause in my proposal. ] (]) 20:06, 17 December 2024 (UTC)
:Few modifications of my proposal based on feedback: | |||
:*Keep the decentralized model. | |||
:*Drop the extensive training clause. No extensive training will be required. | |||
:*Individuals are expected to learn about the topic before editing. | |||
:*It will be necessary to join research groups (i.e., groups made of editors collaborating on the topic) to better learn about the topic and better contribute to it.
:*Editing of ''various'' articles, including election articles, will require an extended-confirmed status. | |||
:] (]) 20:04, 17 December 2024 (UTC) | |||
::So these groups are basically ] but more restrictive? ] <sup>(]) (])</sup> 20:17, 17 December 2024 (UTC) | |||
===Modified proposal: Research panels between editors and greater article protections=== | |||
I got your feedback on this policy proposal and decided to overhaul the proposal to maintain the free nature of Misplaced Pages and strong editor collaboration while ensuring accuracy and reliability.
In order for Misplaced Pages's articles to be more reliable and less biased and to make Misplaced Pages a more reliable site, I believe that there should be a mandatory creation of '''Research panels''', topic collaborations between editors focused on gathering information, comparing sources, establishing editing policies and guidelines in accordance with general Misplaced Pages rules, and ensuring adherence to editing guidelines. Formation of research panels will be necessary in every topic.
*The editing model will remain decentralized. However, Research panels will determine policies based on consensus for their respective topics.
*Individuals can join a research panel freely. However, individuals are expected to follow the policies and protocols set forth by the research panel.
*Individuals are expected to learn about the topic before editing.
*It will be necessary to join research panels to better learn about the topic and better contribute to it.
*''Various'' controversial articles, including election articles, will receive special protection or extended-confirmed protection upon request from the respective Research panel.
Research Panels will allow people to better understand the topic while integrating cooperation efforts to find the most accurate information and create the best possible articles.
--] (]) 20:36, 17 December 2024 (UTC)
:This still feels like ] with more rules. ] <sup>(]) (])</sup> 20:39, 17 December 2024 (UTC)
::Ah, then we can use that and implement it more strongly across every topic. We do need some sort of organization, though. ] (]) 20:41, 17 December 2024 (UTC)
:This modified proposal is sloppy at best and I doubt it will get any approval. However, I think y'all should take whatever good ideas are there (if any) and also create spaces or something like that for greater learning and collaboration among editors to resolve conflicts and biases. ] (]) 20:39, 17 December 2024 (UTC)
:"Individuals are expected to learn about the topic before editing"+"It will be necessary to join research panels to better learn about the topic" seems to add up to "Users are expected to be members of research panels before editing topics in that realm."
:So if I look at an article about "semiotics" (about which I know nothing) I am not allowed to correct, say, some ] failures, some incorrect header capitalization, and a misspelling of Charles Schulz's last name until I train myself in semiotics and join the Official Misplaced Pages Semiotics Fun Team? -- ] (]) 20:48, 17 December 2024 (UTC)
::The whole training clause has been dropped and you could freely join the panels, so you would be allowed to do all that. ] (]) 20:50, 17 December 2024 (UTC)
::Also, the whole editing thing only applies to major edits, not minor ones.
::Either way, this isn't my best work. The first proposal was essentially Nupedia revivalism and the modification is a sloppy attempt at salvaging it based on the original essence of investigating a topic further to put out the most accurate information (as is necessary in the ] article) ] (]) 20:53, 17 December 2024 (UTC)
::I understand that I would be allowed to jump through whatever hoops you erect in order to be able to correct misspellings, but that seems an effective way to discourage me from improving the encyclopedia and getting it in line with general standards. (I am also unconvinced that this will reduce bias; it would seem to help solidify a smaller group of editors on various topics who are likely to be driven, quite possibly by bias, to go through the effort which you're adding.) -- ] (]) 20:54, 17 December 2024 (UTC)
:::<small>(Note, that was written before you made the "minor edit" change.) -- ] (]) 20:55, 17 December 2024 (UTC)</small>
:::Regardless, I am '''dropping''' my proposal because it is poorly written and will probably not be helpful at all. ] (]) 21:21, 17 December 2024 (UTC)
{{Archive bottom}}
:Note though, that Misplaced Pages uses neither British nor American ''literary'' punctuation systems. Rather, WP uses a technical or logical system (I forget its exact name): punctuation is within the quotes if it is part of the quote, otherwise it is outside. Thus, what is inside the quote is ''exactly'' what appears in the source. — ] ] 05:28, 27 May 2006 (UTC)
== Metric versus American/] measurements ==
:See also ] --] 22:36, 17 May 2006 (UTC)
Since this language's version of Misplaced Pages is the only one to cover a country which does not use SI measurements (and in fact, a supermajority of native English speakers do not), it should be the policy of Misplaced Pages for all articles to include both metric and American units in all pages where measurements are used. If there is a page lacking in this, it should be noted by a template. ] 07:14, 9 May 2006 (UTC)
:I'm inclined to very reluctantly agree. As horrible as the customary units are, Misplaced Pages can't change popular usage. On the other hand, I think that there are large categories of articles that do not need customary units (even if this proposal were implemented), such as those in astronomy. ] 07:22, 9 May 2006 (UTC)
:I'm fine with people adding a conversion, but I don't want a new template, as it would just be needless clutter. People wishing to add conversions can easily do so themselves. I don't want to see hundreds (even thousands) of pages tagged with a new template. --] 07:38, 9 May 2006 (UTC)
::To clarify what I said above, I'm agreeing with the policy to include both units, not to have a template. ] 07:41, 9 May 2006 (UTC)
Since when did the US not use metric units? What is habitual and what is actual policy are two different things entirely, and the US government designated the SI system as the preferred one 18 full years ago. Further, go right ahead and tell me the speed of light in feet per second. We all know it moves at 300.000km/s, meaning 300.000.000m/s, meaning 300.000.000.000mm/s. Now, equally swiftly, without a calculator, tell me what this is in miles/sec, yards/sec, feet/sec and finally, let's not forget the smallest (and my, how accurate it is too) unit available; inches/sec. To put some more emphasis on the great accuracy of the CUs, how many inches is an average ]? Over a span of 3 unit denominators, it's 12,5mm, 1,25 cm, and 0,125m. How many inches, feet and yards is this?
My points are; 1: If you want to trawl all Misplaced Pages articles for occurrences of units not provided in customary units, go right ahead. However, the sheer volume of Misplaced Pages, and the complete lack of logic in finding the lesser unit of what you currently have, means you've got a nice life's work cut out for you. Enjoy. 2: In an encyclopedia, accuracy - not the habits and quirks of one user group (which by the way happens to claim majority (which is equally false, as you clearly know, and that cleverly adjusting your statistics to show ''native'' English-speakers won't change the fact that most of the world still uses BrE, having been, as it were, under British rule or influence for longer than the US has been a country.)) - should be priority. --] 08:09, 9 May 2006 (UTC)
:::Actually it's ]. ] 17:31, 23 May 2006 (UTC)
The term "American units" is a misnomer. Most British people still use them in everyday usage as well, despite Britain being "officially" (and generally reluctantly) metric. And many things in the UK, including our roadsigns, are still officially in imperial units (it's actually illegal to use only metric units on roadsigns), so let's not have any false claims that it's only the United States that uses these units. -- ] 09:30, 9 May 2006 (UTC)
:Does Britain use ]? I realize they're similar, if not identical, but still. Starting 3 years from now, any product marked with non-SI units will be banned from import into the EU. That ought to help. --] 09:37, 9 May 2006 (UTC)
::The one thing Britain is, is confused. I measure distance in mm, cm, m and ''mile''... However, back to the point - Britain is very much a metric country, imperial units have not been taught at school for decades. The mile and pint really are the last remaining official uses...
:::Imperial units are taught in British schools, including conversions between metric and imperial. I know because I have taught it, and it is still on the National Curriculum. ] 21:50, 17 May 2006 (UTC)
::However the Imperial units in Britain are different to "American units". Our ] isn't your pint (20 vs 16 fl oz), our ] isn't yours, our ] isn't yours... Get the point? If not take a read of ]/] 09:47, 9 May 2006 (UTC)
:::Britain, as I said, is officially metric. But in practice it is not, even for those (like myself) who were taught metric units at school, but wouldn't dream of using them unless forced (the only people I've ever heard using metric measurements in day-to-day life have been scientists). Those who claim otherwise are usually evangelical (and rather delusional) metric fans who don't want to accept that their beloved system isn't popular. Also note that Imperial measurements of length, area and basic weight (the ounce and pound) are identical to the American. My main point, however, was not to claim that British and American systems were identical (although some parts of it are), but to counter the arguments that the United States is the only country that retains non-metric measurements. -- ] 10:03, 9 May 2006 (UTC)
::::Well I guess it's a personal/generation issue. I'd just like to make it known we're not all imperial unit monkeys ;) As for being evangelical or delusional, I personally couldn't care - I'd measure my height in feet and inches; my weight in kilos; the distance to my house in miles; the size of a room to the nearest unit (e.g. 8ft x 6m); liquids in litres, unless I'm drinking a pint; and when shopping metric... /] 10:21, 9 May 2006 (UTC)
::::I'm British. I accept that my beloved metric system (?) isn't popular in Britain. I think in the metric system. If I add some figures to an article, I'm not going to bother with non-metric units. If somebody wants to add a non-metric "translation", that won't bother me. If on the other hand somebody wants to give priority to his or her beloved antique metrology, thereby relegating my own beloved metrology to parentheses, I shall get annoyed. But of course I mustn't show my annoyance, must I? -- ] 11:11, 9 May 2006 (UTC)
== Is the "above 8000 words = split" an absolute rule? == | |||
:::::I'm always amused when people describe the Imperial system as an antique while favouring a system developed in the 18th century! -- ] 11:38, 9 May 2006 (UTC) | |||
I am referring to this chart found on ]: | |||
::::::Point taken. Me, I'm amused when the ''more'' antique mishmash is referred to as the "Imperial system". But then I reflect that there's some truth to it, as we're all under ''pax (?) Americana'' these days. -- ] 12:54, 9 May 2006 (UTC) | |||
{| class="wikitable" cellspacing="0" cellpadding="4" | |||
:::::Blah, common sense should prevail. We don't need Imperial units (or US Standard for that matter) in all articles, anything science related should be in SI, anything else should be in whatever people decide on the page. I'm 22 and from the UK, and I still use feet/inches, pints, stone etc. The decision of what to include should be worked out on article talk pages, but I would strongly object to a blanket policy of having Imperial or US Standard in parentheses. - ] 22:49, 17 May 2006 (UTC) | |||
|- | |||
! Word count | |||
! scope="col" | What to do | |||
|- | |||
| > 15,000 words || Almost certainly should be divided or trimmed. | |||
|- | |||
| > 9,000 words || Probably should be divided or trimmed, though the scope of a topic can sometimes justify the added reading material. | |||
|- | |||
| > 8,000 words || May need to be divided or trimmed; likelihood goes up with size. | |||
|- | |||
| < 6,000 words || Length alone does not justify division or trimming. | |||
|- | |||
| < 150 words || If an article or list has remained this size for over two months, consider merging it with a related article.<br /> Alternatively, the article could be expanded; see ]. | |||
|} | |||
I have seen a few instances where, an editor will raise the issue that an article is too large at, say, 7500 words or 8100 words. We have multiple history pages (and medical/psychology pages) with well over 11,000+ words, even some with over 16000. Where does one draw the line? It seems like Misplaced Pages leaves it to the discretion of the editor after about 8000 words. ] (]) 07:21, 18 December 2024 (UTC) | |||
Units should be dealt with on the WikiProject level. A universal policy could never account for all the idiosyncracies you get in specialized fields. ] 10:08, 9 May 2006 (UTC) | |||
:Looking at the table, it's obvious that "above 8000 words=Split" is not "an absolute rule". I promise you that if it were, that table would say something that sounded remarkably like "if the article is above 8,000 words, then it absolutely must be split". | |||
I think all articles should use metric. The English WP is available to billions of English speakers around the world, often speaking English as a second language. SI is international - that is its point. If people would like to add their own local units too (particularly when referring to local issues), I won't mind that. ] 12:33, 9 May 2006 (UTC) | |||
:Additionally, we have ]. | |||
:Where one draws the line is: In a place that makes sense for the topic of that specific article, having thoughtfully considered all the facts and circumstances that apply to that unique article. ] (]) 07:41, 18 December 2024 (UTC) | |||
:There was a lengthy discussion at ] about the size guidance, for the record. Splitting pages is a lot of work and not everyone thinks that spreading stuff over multiple pages is better for readers than having in one big page. ] (]) 08:19, 18 December 2024 (UTC) | |||
::In addition to the above, what matters for the technical aspects of article size is not the number of words but the number of bytes. Word count can only ever be an approximation of that as the length of the words used matters ("a" is 1 byte, "comprehensive" is 13), the number and size of included media matters very significantly more. ] (]) 09:20, 18 December 2024 (UTC) | |||
:::I think ] is a bigger technical challenge for long articles. The more templates, and the more complicated templates, the more likely you are to need to split for technical reasons. ] needs a split in part due to PEIS reasons. ] (]) 18:43, 18 December 2024 (UTC) | |||
:What's more, there's nothing even in the excerpt here that would purport an absolute guideline. <span style="border-radius:2px;padding:3px;background:#1E816F">]<span style="color:#fff"> ‥ </span>]</span> 09:44, 18 December 2024 (UTC) | |||
::It isn't an absolute rule, but ''usually'' an article having an extremely high word count is evidence of a bigger problem with ] -- that it's too dense or detailed for a reader to use it as a first-port-of-call summary. As such, ''usually'', it's a wise move to create daughter articles for the detailed material, and trim it down to its essentials in the main article; this improves the readability of the main article and allows interested readers to follow up into the nitty-gritty. As {{u|Jo-Jo Eumerus}} rightly says above, though, there's not really such thing as an absolute rule in this place. '']'' <sup>]·]</sup> 09:59, 18 December 2024 (UTC)
:::What we now know is that many readers are looking for specific information, with few reading from top to bottom, but the search engines send them to the mother article even when a more specific daughter article exists. So the first port of call needs to be the most highly detailed. The advice in ] is therefore considered well intentioned but obsolete; stripping the mother article and pushing information down to the daughter articles defeats our whole purpose in providing information. ] ] 11:31, 22 December 2024 (UTC) | |||
::::When you say “we know…”, “is considered” and similar, are you pointing to specific previous discussions, RfCs etc on this matter? “In the wild”, as it were, I still see these size limits regularly invoked, even if the conversation rarely ends at them. '']'' <sup>]·]</sup> 09:17, 23 December 2024 (UTC) | |||
: Rather than draw a line, I'd rather just remove that chart. Can't imagine why a suite of concrete word counts and procedures would ever be very helpful. — <samp>] <sup style="font-size:80%;">]</sup></samp> \\ 16:29, 18 December 2024 (UTC) | |||
::It absolutely makes sense to say at what page sizes editors should start considering other options, as well as where splitting can be absolutely unnecessary. Nothing wrong with the table as long as it's clear those aren't hard-and-fast rules. ] (]) 16:41, 18 December 2024 (UTC)
:::Agreed, I find it helpful because it helps me remember what is generally too long for mobile users (I understand that mobile is generally a blind spot for us as editors because the vast majority of us don't edit on mobile but most of the readers are actually on mobile) ] (]) 16:49, 18 December 2024 (UTC)
::::I also believe that the chart is helpful. ] (]) 17:29, 18 December 2024 (UTC) | |||
:There don't seem to be any absolute rules laid out there... Even "Almost certainly" is qualified not an absolute rule. ] (]) 16:38, 18 December 2024 (UTC) | |||
*The optimal article size varies quite a lot, actually. Key things we need to consider include: | |||
*:The likely readership. Someone who's looking up ] probably has time to read something long and thoughtful. Someone who's looking up ] might need basic facts, in simple words, very fast. | |||
*:The cognitive load associated with the topic. ] is (very) long but easy to understand; ] is much shorter, but I bet it takes you longer to read, unless you have unusual expertise in mathematics. | |||
:This is not the kind of thing on which we can produce simplistic guidance.—] <small>]/]</small> 17:03, 18 December 2024 (UTC) | |||
:Misplaced Pages leaves it to the discretion of editors far, far before 8,000 words. We have thousands of single-sentence articles to attest to this. The average article is less than 700 words. ] (]) 17:15, 18 December 2024 (UTC)
::The median article length is around 350 words, and the mean is 750. About one in 75 articles has more than 6,000 words. ] (]) 17:34, 18 December 2024 (UTC) | |||
:::You'll have to take the specifics up with ], although that ballpark range sounds the same. ] (]) 18:45, 18 December 2024 (UTC) | |||
:No. ] (]) 18:52, 18 December 2024 (UTC) | |||
*I've always felt that the kB of readable prose was a better metric for page size (such as is produced by various page size gadgets). Turns out, bigger words take longer to read than shorter words :P Doing it just by wordcount encourages a certain verbosity. For me, my rule of thumb has always aimed to keep big articles under 100kb readable prose. But there's no hard and fast rule, and there shouldn't be. ] <sup>]</sup>] 23:08, 22 December 2024 (UTC) | |||
*:If I'm being honest, what might be the best metric is starting at the top and lightly hammering the {{key|Page Down}} key for a bit. If I groan before reaching the References section, it's too long. <span style="border-radius:2px;padding:3px;background:#1E816F">]<span style="color:#fff"> ‥ </span>]</span> 23:10, 22 December 2024 (UTC) | |||
*::For example, results were heavily discouraging for ] until recently; ] at the article's uncaring girth—thanks Nikki et al.! <span style="border-radius:2px;padding:3px;background:#1E816F">]<span style="color:#fff"> ‥ </span>]</span> 23:18, 22 December 2024 (UTC) | |||
*:::26,000 words is {{tomats|26000}}. Another way to look at that table is by saying that if it's more than half the length of a book, it's pushing past being "an article" and edging up towards being "a book". | |||
*:::Or you can look at it in terms of how many minutes reading the whole thing would take. There's quite a bit of variation, but for easy math, 300 words per minute means that a 15,000-word-long article would take 50 minutes to read, which almost certainly exceeds the interest and attention span of most readers. ] (]) 00:35, 23 December 2024 (UTC) | |||
*::::I think the most fundamental scalar isn't quite reading time or even visual size, but structural complexity—for an online encyclopedia article, being overlong expresses itself in my increasing inability to navigate an article comfortably to read or locate what I want, or to understand the structure of the scope covered by it at a glance. <span style="border-radius:2px;padding:3px;background:#1E816F">]<span style="color:#fff"> ‥ </span>]</span> 00:38, 23 December 2024 (UTC) | |||
*:::::Come to think of it, one thing that makes an article feel longer than its word count is if its sections, media, and other landmarks have been laid out in a careless or unnatural way. <span style="border-radius:2px;padding:3px;background:#1E816F">]<span style="color:#fff"> ‥ </span>]</span> 00:48, 23 December 2024 (UTC) | |||
*'''No.''' And this was rather a pointless, self-answering question in the first place, not something for a VP thread. The answer to the posed question of 'Is the "above 8000 words=Split" an absolute rule?' is obviously "no", both by observing actual WP community practice, and just by reading the table the OP quoted: {{tq|> 8,000 words — May need to be divided or trimmed; likelihood goes up with size}}. Is anyone here actually confused into believing that A) "must" and "may" are synonymous, or B) that a guideline, to which reasonable exceptions sometimes apply, is somehow a legal-level policy that must be obeyed at all costs? In reality, there is never any hurry to split a large article, and doing it properly often involves a tremendous amount of work, involving both repair of citations (sometimes in great detail), and resummarizing the background context in the side article while also resummarizing the side-matter in ] style within the main article (and doing them distinctly enough that the results are not obnoxiously repetitive if the reader moves between the articles). Doing a good job of this can take several days up to a month or longer of tightly focused work, depending on the detail level of the material, the number of citations, etc. It is not trivial, we're all volunteers here, and our readers are not going to keel over and die if they reach a detailed article that's a bit longer than they were expecting or would prefer. Ultimately, an article that is ginormous {{em|usually}} should be split, but there is no deadline, and it needs to be done properly (plus there are often conceptually different ways to go about it from a content-flow perspective, and that might require some consensus discussion). <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — ] ] ] 😼 </span> 01:22, 23 December 2024 (UTC)
*:Ever since WAID reminded me of it, I've thought we should maybe link somewhere as a lemma. <span style="border-radius:2px;padding:3px;background:#1E816F">]<span style="color:#fff"> ‥ </span>]</span> 01:36, 23 December 2024 (UTC) | |||
*::I think I linked it once in ], years ago, and someone objected. I didn't follow up to see whether the objecting editor is one of the handful who think that ''should'' is a more polite and/or IAR-compliant way to say ''must'', but as that's a fairly uncommon POV among editors, it probably wasn't. ] (]) 05:41, 23 December 2024 (UTC) | |||
*:::The linked document pushes very hard on ''should'': "there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed" is not a low bar. It sounds much like must except when IAR. ] (]) 09:39, 23 December 2024 (UTC)
== ] == | |||
:How about: | |||
::Where the original form of a measurement is known it should be specified first with conversions in brackets. Where the original form is not known and there are no other overriding considerations (e.g. local conventions in local articles) metric should be placed first with the conversions in brackets. Ambiguous units like the ton and the gallon should be avoided where possible and when they are included they should always be clarified. | |||
: -- ] 13:56, 9 May 2006 (UTC) | |||
I want to propose ] as a new guideline with an RfC. I'm starting this thread in case any outsiders to this area want to provide input or have questions. For context, the goal of this manual of style is to get agreement on broad principles to make editing easier in this topic area. As an example, ] is dealing with inconsistent use of the word "massacre" specifically, which has caused much arguing over whether there is a double standard, so this guideline makes the standards we should be using explicit. <span class="nowrap">] (]) <small>(please ] me on reply)</small></span> 06:38, 21 December 2024 (UTC) | |||
:All articles should use both metric and American units (in the case of ambiguous units such as gallons, clearly designated as being American if you can't use a more commonly-used non-SI unit). This provides maximum information with minimum clutter. Many of our readers are not familiar enough with the metric system to understand units given only in metric, and many (probably most) are not familiar enough with American units to understand units given only in American units. What governments say is completely irrelevant; it's our readers that we're here to serve.<p>As for specialized fields, Misplaced Pages serves a general audience, not just specialists. Even if American physicists always use the metric system for physics, other Americans/Brits/Canadians/etc. (almost no former British colonies are fully converted to SI) will also want to read and understand the article.<p>The only exception to this rule is when the units involved are so ridiculously beyond what we use in everyday life that normal units are insufficient or the differences are negligible; our readers don't need to be told that 1.41679 × 10<sup>32</sup> K equals 2.55022 × 10<sup>32</sup> °F, or that 130 light years equals 7.6427 × 10<sup>14</sup> miles—nothing is gained in comprehensibility from that. But the density of ], that's something that should be in both metric and American/imperial units. —] (] • ]) 03:59, 10 May 2006 (UTC) | |||
*:Are all four points currently included based on previous conversations, or are any novel to this proposal? On the broader framing, I wouldn't create the guideline solely talking about NPOV; it is helpful to have guidelines to help describe encyclopaedic style if nothing else. On the example of massacre, I'm not up to date with the recent or ongoing discussions, but I'm surprised there's no mention in the draft of ], as this seems a classic case. ] (]) 07:00, 21 December 2024 (UTC)
Two points. I reinforce the plea that if a unit like the pint (which differs between the US and other countries) is used, it should be explained which is meant, ideally with a conversion into the other sort. And there is often more than one metric unit. The density of mercury is about 13.6 grammes (grams?) per cubic centimetre in cgs units and 13,600 kilogrammes per cubic metre in SI units; probably, most people would prefer the former, although scientists usually use SI. The official unit astronomers use to measure distances to stars is the parsec, although common usage prefers the light year; neither is strictly an SI unit. ] 19:38, 10 May 2006 (UTC) | |||
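The conversions quoted in this thread (mercury's density in cgs vs. SI units, and light years to miles from the earlier comment) can be checked mechanically. The sketch below is purely illustrative and not part of any Misplaced Pages tool or policy; the conversion factors are the standard definitions.

```python
# Illustrative sketch: verifying the unit conversions quoted in this thread.
G_PER_CM3_TO_KG_PER_M3 = 1000          # 1 g/cm^3 = 1000 kg/m^3
LIGHT_YEAR_M = 9.4607304725808e15      # metres in one Julian light year
MILE_M = 1609.344                      # metres in one statute mile

def density_cgs_to_si(g_per_cm3):
    """Convert a density from cgs (g/cm^3) to SI (kg/m^3)."""
    return g_per_cm3 * G_PER_CM3_TO_KG_PER_M3

def light_years_to_miles(ly):
    """Convert a distance in light years to statute miles."""
    return ly * LIGHT_YEAR_M / MILE_M

print(density_cgs_to_si(13.6))             # 13600.0 kg/m^3, as stated above
print(f"{light_years_to_miles(130):.3e}")  # roughly 7.64e14 miles
```

Both results agree with the figures given in the comments above (13,600 kg/m³ and about 7.64 × 10¹⁴ miles).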
::{{re|Chipmunkdavis}} The settlements language comes from previous discussions/consensus, likewise with the West Bank naming conventions section. The categorization language comes from a suggestion on the talk page of the draft. | |||
::The "massacre" part is a novel part of the proposal. I would say that it seems like an obvious ], but there are many RMs in the topic area in which editors use their own definition of the term "massacre" (like civilian death toll, mechanism of killing, see ] for a list of 24 RMs about the term "massacre"), and argue about whether or not the event meets that standard. I want to make it easier for editors to disengage by citing this guideline, instead of getting into lengthy arguments over points that don't contribute much to consensus (which is what happens now).
::The reason the guideline is short is because I believe it will give it the best chance of passing right now. In the future, I'd like to get consensus to add more points to this guideline. I thought proposing a lengthy guideline upfront would create a scenario in which everyone at an RfC agrees a guideline would benefit the area, but individual editors have a small issue that prevents them from supporting ''this specific version''. <span class="nowrap">] (]) <small>(please ] me on reply)</small></span> 07:50, 21 December 2024 (UTC) | |||
*:::At a quick look, it seems most pages on that statistics page were not moved to a title with "massacre"? Seems sensible to collate previous discussions together; if the massacre guidance is new (as new as something discussed 24 times can be?), it might need to be confirmed separately. ] (]) 08:20, 21 December 2024 (UTC)
*:One issue with the massacre bit is that it should be part of an AT naming convention, not in the MOS. Likewise, appropriate categorizations usually aren't in the MOS. ] (]/]) 15:27, 21 December 2024 (UTC)
== I (+other people) co-authored a draft article, but its submission was declined == | |||
I'd love to use American or Imperial units but they get so complex once you get beyond basic stuff - working with measurements trying to work out if they are eighths or twelfths, long or short tons, and how many pints to the quart anyway? Not to mention fathoms and furlongs, bushels and chains. But I'm an adult, and most of the time I've got a fair idea what people are talking about. If I want to know what the exact conversion is, I'll pull out the calculator. I can cope with whatever anyone writes. One point though. English is the international language with probably more ESL speakers than native speakers. And SI is the international system of measurement. --] 14:35, 11 May 2006 (UTC)
{{atop | |||
| result = @]: This is not the place to ask questions about your drafts. You should ask @] on his talk page. ] (]/]) 15:29, 22 December 2024 (UTC) | |||
}} | |||
supposedly due to: "This submission is not adequately supported by reliable sources". | |||
I think the preceding discussion is well meaning, but short on facts. When people say things like "most of our readers" I wonder where the numbers are to back up these claims. Note that we have guidelines on this topic at ] and ] which say sometimes SI units are mandatory and that conversions should not be removed. If you want to add conversions to articles, I suggest adding them as you find them or organizing a wikiproject to do so. -- ] 15:39, 11 May 2006 (UTC) | |||
It seems to me that there are plenty of good references now. | |||
I want to understand what is missing and how it can be improved. | |||
A side note: there are many published articles in English Misplaced Pages which are of much lower quality than this draft: https://en.wikipedia.org/Draft:Maternity_capital ] (]) 15:19, 22 December 2024 (UTC) | |||
{{abot}} | |||
== Is it time to write an official policy on usage of AI? == | |||
Just two points to add: all packaged goods in Britain are labelled in metric, although loose goods can be ordered in Imperial. I drink pints of beer but buy milk in litres; I think in Fahrenheit, but everybody else I know thinks in Celsius. Secondly, as Michal Johnson implies, Wiki En probably has a large ESL readership. In fact, judging by many of the contributions, Wiki En is frequently written by non-native speakers (look at any article concerning a non-English speaking country). Recipe books can manage multiple measurements, why limit Wiki En to one continent?
{{Moved discussion from|Misplaced Pages:Administrators' noticeboard#Is it time to write an official policy on usage of AI?| ] (]/]) 03:20, 25 December 2024 (UTC)}} | |||
I've been following ] for a few weeks now (because I like gossip apparently) and I've noticed several incidents involving persons suspected of using AI for editing. Is it time to create an official policy to point to? ] exists, but it's an informational article rather than a policy page. (I don't actually know which page to put this on so it's going here.) ] (]) 02:25, 25 December 2024 (UTC) | |||
The presumption that SI units are not widely used in the USA really applies to the household. Many industries have converted to SI, especially industries engaged in international trade. On the other hand, there is one area of high technology where inches are in common use, computer printers, with terms such as dots per inch and pixels per inch. ] 21:28, 22 May 2006 (UTC) | |||
:] is ongoing now for those interested. ] (]) 02:33, 25 December 2024 (UTC) | |||
I think that in every place where it is relevant, we should use both metric and American measurements. Metric may be used in the majority of the world, but a large percentage of people on the English Misplaced Pages are from the United States and have no sense of scale in the metric system, no matter how much they see it in their life. Foreign articles as well as American articles should use it. Finding conversion calculators online is ridiculously easy, so look one up, convert the two measurements, and put it into the article. And the wide availability of these things shouldn't be an excuse ''not'' to put them on here, as this is an encyclopedia and should be as NPOV as possible, and should accommodate as many people as possible. I don't want to look up a conversion calculator every time I see "163 kilometers" or "26 degrees Celsius" or whatever. I want to be able to know what the American measurement is right there, and if I used metric measurements and it only had Imperial on the page, I would want to see the Metric conversion. ] 23:54, 23 May 2006 (UTC)
:See also ]. If this is an issue you're interested in, you might want to try getting involved with ]. ] (]/]) 03:22, 25 December 2024 (UTC) | |||
:Please see ] for an essay on this topic. Folks have been discussing this issue for a while now, whether or not it can become policy is another debate. — <b>]:<sup>]</sup></b> 19:24, 25 December 2024 (UTC) | |||
:I think that Misplaced Pages should explicitly favor metric measurements, as they are universal and (despite what Bob says) more easily understood and converted. As has been pointed out, there are significant divergences within US/Imperial units, in particular liquid measurement, but also the long/short ton, and not all units are widely understood (stone, furlong). I'd have no objection to customary units being given alongside metric ones. ] <sup><font size="-2">]</font></sup> 11:30, 24 May 2006 (UTC) | |||
::Of note, there was an RfC in October 2023 where there was a ] promoting that essay to a policy or guideline. ] (]/]) 19:31, 25 December 2024 (UTC) | |||
::The problem with favoring any one system of measurements over another is that no single system is appropriate for all possible topics. The dimensions to ] are specified in ]s, ] used ]s to space its milecastles, the ] is specified in ], and there is no ] book titled '']''. Other articles depend on sources that use ]s, ]s, ], ], or ]. Trying to shoehorn these dimensions into SI units is not only elitist, but in many cases will violate ]. Instead of forcing everything into a single system, each article needing to list some type of measurement should use a measurement appropriate to the subject's nature, place, and time in history. It is only when this is done that conversions to aid the reader should be added. --'']'' <sup>]</sup> 13:58, 24 May 2006 (UTC) | |||
:::No, this is the same anti-metric argument given by people who think we'd have to sing "I'd walk 1,609,300 kilometers for one of your smiles"... Of course, articles should use non-metric units if the article is a) about those units b) about something that frequently uses non-metric units (such as, say, pipe widths, or certain sports, although metric equivalents should be given), or c) uses historic material or direct quotations that refer to such units. But if none of these criteria are met, then metric units should be the primary ones used. ] <sup><font size="-2">]</font></sup> 22:39, 24 May 2006 (UTC) | |||
::Your statement that metric units are "more easily understood" is quite simply wrong when it comes to a large percentage of our readers. Americans, as well as to a substantially lesser extent Brits/Canadians/Australians, do not understand metric units as well as they understand imperial units. You were born in Britain and currently live in Germany; if you were born and lived in America, you would not think metric units are more easily understood. The only thing that's easily understood by ''all'' our readers is metric ''plus'' imperial, and that's why we should always use that. —] (] • ]) 04:47, 25 May 2006 (UTC) | |||
:Actually, I'm a US citizen, and know well how "confusing" many Americans wrongly assume the metric system to be. However, starting from a position of unfamiliarity, metric units are more easily understood, as they are far more logical, are interrelated, and are based on multiples of ten, and are likely to be understood by a far greater number of the billion or so fluent speakers of English, including a good number of educated Americans. (Thanks for reading my biography, but I never make it clear how long I've lived anywhere. It's a little like the userbox debate.) ] <sup><font size="-2">]</font></sup> 08:06, 25 May 2006 (UTC) | |||
:: In the UK, we are increasingly using metric units. Food has to be sold in metric units now, except in important cases (eg of beer). Petrol (!) is sold in litres. Energy is measured in kW hours. However, precious metals are still in $/Troy Oz. Are people in the US becoming increasingly metric as they get more internationalised? ] 08:23, 25 May 2006 (UTC)
::: There are exactly two metric units in everyday usage in the US: multi-serving bottles of soda are sold in liters, and electricity is measured in kilowatt-hours. Everything else is US Customary. --] 09:50, 25 May 2006 (UTC) | |||
:::A large reason for the shift in the UK is due to ]'s packaging regulations. Although European trade regulations have some influence on American companies (it's easier to make one package that is salable both in the EU and in the US, and the EU's regulations are more strict), the influence is not nearly so strong as in the UK. — ] ] 05:33, 27 May 2006 (UTC) | |||
::::American packaging regulations mandate metric units alongside non-metric units. See ] and the . ] 00:14, 29 May 2006 (UTC) | |||
== reservasion in educational admision based on caste == | |||
do you believe in the caste system? | |||
this is the basic question | |||
and if so, do you not believe in a global community? | |||
we as human beings are the same | |||
only economic conditions may require help and assistance for their | |||
upliftment | |||
we cannot keep seats reserved only on the basis of caste | |||
doing so will hamper the rights of other deserving people | |||
<small>-- <anon> </small> | |||
: Err, ok? In what context is this? ] 20:42, 14 May 2006 (UTC) | |||
:I can't see what impact this has on Misplaced Pages policy. ] 18:47, 18 May 2006 (UTC) | |||
::I vote yes. --] <sup>]</sup> 05:11, 24 May 2006 (UTC) | |||
:I don't think it has any relation to Misplaced Pages policy at all. '''Soapbox comment.''' <b><i><font color="#FF00FF">~Kylu (]|]) </font></i></b> 23:18, 29 May 2006 (UTC) | |||
== User boxes episode 12,000 - a better solution == | |||
Hi all, I really go crazy over the depressing flood of user boxes this Misplaced Pages has been undergoing over the last year. I am a strong proponent of deleting '''all''' of them, perhaps except for the language skill templates, the location templates and the WikiProject boxes, that's it. However, I have a more fundamental solution for the user boxes problem: discard the user page namespace. Misplaced Pages is an encyclopedia and it is not intended for vanity. However, this rule does not seem to apply to user pages - they are sometimes expanded into sheer home pages or web logs. We should put an end to this. What I envision is a non-wiki page, comparable to "my preferences", where you can enter your real name (optional, of course!), your nationality/location, your profession, your expertise, your language skills and your home page (if you have one). Single login should automatically generate interwiki links. That really is enough. This topic borders on being better fitted for the technical village pump, btw.<br> | |||
Are there any people that agree with me? Here is an example of what such a page might look like (all names are fictional!): | |||
::<big>'''User:HaikuReader'''</big> | |||
::'''Real name''': George F. Williams | |||
::'''Location''': ], ], ] | |||
::'''Profession''': ] | |||
::'''Expertise''': ], ] | |||
::'''Language skills''': ], ], ] | |||
::'''Homepage''': http://www.readmyhaikus.net (''just a silly example!'') | |||
Looks quite trimmed. Personally it would feel like a liberation to me! | |||
] 21:57, 14 May 2006 (UTC) | |||
:I know a lot of users that use their page for reference (as in, useful wikilinks, to-be-made articles, statements of personal Misplaced Pages philosophy, and so forth), not to mention as a sandbox for works-in-progress. It would seem unfortunate to remove all of them. ] 22:01, 14 May 2006 (UTC) | |||
::This actually seems like a much ''worse'' solution... userpages have a purpose that helps the encyclopedia, restricting them to raw information that's largely irrelevant is not the way to go. Sorry.--] <sup><font color="#FC0FC0">]</font></sup> 22:06, 14 May 2006 (UTC) | |||
:I think this is a terrible idea. The problem is that some users work on the philosophy that biases shouldn't be displayed, fearing division, the idea that people will act out the biases on their user pages, etc. On the other extreme, you have people like me who highly recommend that biases be openly declared instead of hidden. This way editors can acknowledge their biases as they edit, and allow everyone to clearly recognize and negotiate their differences. In an ethnography class, I was even instructed to write down biases before conducting an interview, so that I could accommodate for them, rather than pretending they don't exist. No one on Misplaced Pages has a "neutral point of view", and I find it dangerous for users to pretend that they do. Not to say I'd actually suggest forcing all users to list their own biases, but the idea of forcefully censoring this information raises a red flag with me. ] 23:50, 14 May 2006 (UTC)
:I agree with the above users - this idea would not be a satisfactory solution in any way. Looking at my userpage, you will find that I keep it relatively functional and with few frills, as you suggest, yet I treasure the ability to keep things somewhat personalised, and I think community interaction and statement of character on a userpage is a large part of the Misplaced Pages community. Being Misplaced Pages editors does not prevent us from also being humans, and if we remove the human aspect from Misplaced Pages, we might as well just create bots to make articles and abandon Misplaced Pages to the bots altogether. <span style="font-size:90%;">—'''<font color="sandybrown">]</font><font color="green">]</font><font color="cornflowerblue">]</font>'''</span><sup><font color="olivedrab">]</font>|<font color="steelblue">]</font></sup><span style="font-size:85%;">, ], ] ] @ 00:38 ]</span>''' | |||
::Ok, thanks for your comments. Actually, my proposal is a little bit exaggerated - it was a statement rather than something I'd really try to get achieved. I wanted to make clear that user pages are a means of communication, ''not'' an end in themselves. Especially the huge flood of user box templates disturbs me. I hope we can agree at least on ''that'' (I mean that user pages aren't an end in themselves)? ] 12:23, 15 May 2006 (UTC)
Is it feasible to have pages only visible to the logged-in users, where they can list articles in progress or have a personal sandbox? That's already the case for personal watchlists. ] 18:53, 18 May 2006 (UTC) | |||
:here's an idea. While it is useful to have google etc. crawl article talkpages, there should really be no reason for ''readers'' to even see user namespace. We've had all sorts of privacy issues with user pages before, and issues with radical agendas (nationalist, racist, what have you) pushed in user page essays. So I think the idea of making user namespace visible to logged-in users only should be thrown about a little bit more. Also, while there will always be personal sandboxes reflecting the maturity of the respective editors, a solution could be to prohibit template transclusion in user namespace: People are free to have puerile stuff on their userpages, but the problem with userboxes is that the puerility is spilled all over (the ''non''-user) template namespace. This must stop. At the very least, create a User_Template: namespace for the things. ] <small>]</small> 10:11, 19 May 2006 (UTC) | |||
But what about User_talk? Any restriction of the User page will simply result in its content being dumped into User_talk, so the only way to make it work would be to restrict both of them. Considering how vital User_talk is to Misplaced Pages's functioning, that would seem a very serious move. ] 17:44, 23 May 2006 (UTC) | |||
This is a very funny proposal. Let's say the box you propose is placed at the left side of User_talk and you can also add optional photo above it. Try to imagine that layout. Now go to http://www.myspace.com and open a random user page! The similarity is striking, isn't it ;) ] 18:28, 23 May 2006 (UTC) | |||
== crime == | |||
Misplaced Pages does not support the actions of any criminal, much less protect them. That is correct.
what's this: ]? --] 08:10, 18 May 2006 (UTC) | |||
:What will the punishment be for people who fight for their rights? I don't know, but it would be big news for any newspaper or TV. "Your rights are only valid outside the wikipedia!" --] 08:31, 18 May 2006 (UTC)
What do you mean by that? – ] (] | ]) 08:33, 18 May 2006 (UTC) | |||
What is the punishment for fighting for one's rights? Someone told me: you would be blocked on the wiki. Is that true?
--] 23:13, 25 May 2006 (UTC) | |||
:I think the answer you are looking for is: | |||
:Not all lawsuits (legal threats) regard criminal law. If you say something I dislike, and I threaten to sue you for ] or some other similar charge, it has a ] on the development of Misplaced Pages. Other people might not want to write an article about me, for fear of being sued. While this would be good for me (since I can go on strangling kittens for a hobby), it's not good for the public, since they look to Misplaced Pages for information on my kitten-strangling habits.
:The ] speaks for itself, it has nothing to do with legitimizing criminal behaviour, please don't misunderstand and think that it does. <b><i><font color="#FF00FF">~Kylu (]|]) </font></i></b> 23:30, 29 May 2006 (UTC) | |||
== Format of the word "ebook" == | |||
Can we come to consensus on the standard punctuation and capitalization of this? In the ''Oxford English Dictionary'', the only use of the letter "e" as a prefix meaning "electronic" is for "email", all lower case. Knowing the OED, and knowing that we don't capitalize Book (or Tape or Download or any other adjective or noun as a generic format), I am nearly positive that the standard spelling is, or will eventually be, '''ebook''' (not e-book, eBook, or Ebook, except as forced by Misplaced Pages title limitations <g>). Can we work this into the naming conventions somewhere? I have posted this at ] as well. It came up during a recent ]. | |||
{{unsigned|Pegship}} | |||
* I fully support "ebook" (and "email"), and would be happy to see a firm policy in favor of these. However, I doubt very much there's any chance of consensus on this. The wretched "e-mail" appears to have won market share, but this and other conflicting spellings are not any more likely to be finally settled, by the OED or anyone, than "aluminum" vs. "aluminium". I think it's going to be like most other conventions on Misplaced Pages, user's choice. Do you know what the current statistical distribution of "ebook", "e-book", "eBook" and "Ebook" is, across all articles? \ ] \ ] \ ] \ 18:13, 18 May 2006 (UTC) | |||
::A simple search turns up 450 articles on the English Misplaced Pages which use "e-book", 867 which use "ebook". I'm unable to break it down by type case; anyone got a tool for this? ''''']''''' 04:16, 19 May 2006 (UTC) | |||
:::This one also gives a sample: http://en.wikipedia.org/search/?title=Special:Whatlinkshere/EBook&limit=500&from=0 (that is: after I cleaned out the ]) | |||
:::Note that Misplaced Pages tries not to be ] about such issues; otherwise a bot would change all occurrences of a term to another variant in a jiffy. The criterion is ''occurrences outside wikipedia''. When the main "trusted" dictionaries (OED, Webster's,...) contain no information or contradict each other, all non-marginal occurrences are usually accepted. Note that making deductions from *other* dictionary entries would constitute ], also not used as the basis for what happens in wikipedia's main namespace.
:::Anyway, I changed the start of the ] article to "An '''eBook''' (also: '''e-book''', '''ebook''') is an " --] 07:55, 19 May 2006 (UTC) | |||
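On the "break it down by type case" question a few comments up: a short script can tally each distinct casing/hyphenation variant in a body of text. This is just an illustrative sketch, not an existing Misplaced Pages tool, and the sample sentence is made up.

```python
# Illustrative sketch: count each spelling/casing variant of "ebook" in text.
import re
from collections import Counter

def ebook_variants(text):
    """Tally every casing/hyphenation variant of 'ebook' found in text."""
    return Counter(re.findall(r"\b[eE]-?[bB]ook\b", text))

sample = "An eBook reader. The e-book market. Buy an ebook. EBook? Ebook!"
print(ebook_variants(sample))
# Counter with one hit each for eBook, e-book, ebook, EBook, Ebook
```

Run against a database dump of article wikitext, this would give the case-sensitive breakdown the search box can't.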
I do hope we don't end up with "eBook" - that's a monstrosity.--] 22:29, 20 May 2006 (UTC) | |||
:Misplaced Pages is not normative. --] 06:22, 21 May 2006 (UTC) | |||
::Having made something of an (hour+) survey outside wikiP using google links while working in parallel with ] (See: ]) I think English-speaking society as a whole is just plain stuck on this minor conundrum. Trying to cut the gordian knot, I've proposed an elimination vote '''(in progress)''' on that link to obviate the 'eBook' form, as a first step here on Misplaced Pages.
::FYI: '''''' is contrary to guideline (substantive difference of opinion on talk) and, if you give the talk a close scrutiny, has been rolled back once before, iirc. But that's trivia, and trivial. I praise the good bold effort, but'' 'this gordian knot' is not so simple''.
::My read of the matter is''' 'academia' '''(language constructionist types, normatively oriented, if you will <g>) prefer the''' 'e-' prefix'''. Whereas, the '''commercial interests''', like publishers, mainly prefer the''' 'eBook' or 'ebook' '''variants. | |||
::In the end, our Misplaced Pages ability to ''redirect'' to some normative form will be necessary for all three flavors insofar as having article titles visible on the web has importance. Given that such visibility is a desirable state, the current 'chaotic' mish-mash 'within our articles' is actually good for Misplaced Pages's benefit—it simply gives more hits per any given search. (As a cold-hearted capitalistic pig at heart, changing nothing gives free advertising, if you will. Now where can I buy stock?? <g>)
::OTOH, what we call base quantities such as a category or our 'main articles' (e.g. ]) would be nice to have settled ASAP, as frankly, the fact that those names are unresolved and unsettled is simply and directly cutting into productivity. (e.g. Adding books to ] ''came to a screeching halt'' over said category naming issue, which in turn has devolved to the naming discussion on ], where I hope you'll all drop in, and lend a vote!) | |||
::My gut feel is the 'compromise' form ''''ebook'''' will be bearable enough to eventually carry the day, but I can live with any form so long as it stops sucking up man-hours of productivity <g>. Please do lend your assistance and lend your voice, errr... vote! Best regards, // <B>]</B><font color="green">]</font> 02:10, 25 May 2006 (UTC) | |||
I would like to reiterate that either '''ebook''' or '''e-book''' makes sense, but capitalizing ''any'' letter in the generic term does not, as "e-book" is not a trademark or proper noun. "''...we don't capitalize Book (or Tape or Download or any other adjective or noun as a generic format)..."'' I don't for a moment believe that what we decide will have a big impact on external use, nor is it carved in stone. I would just like to try to standardize it generally here ''for now''. Cheers, <font color="green">♥</font> ]<font color="green">♥</font> 14:31, 25 May 2006 (UTC)
== Deletionism facing (Judaism) articles == | |||
Hi, I have just placed the following on the ], it touches on a broad range of issues. Thank you. ] 09:44, 19 May 2006 (UTC) | |||
<div style="padding:1em;border:1px solid black"> | |||
:Shalom to everyone: There is presently a very serious phenomenon on Misplaced Pages that affects all articles. Let's call it "The New Deletionism". There are editors on Misplaced Pages who want to cut back the number of "low quality" articles EVEN IF THEY ARE ABOUT NOTABLE TOPICS AND SUBJECTS by skipping the normal procedures of placing {{]}} or {{]}} tags on the articles' pages and instead wish to skip that process altogether and nominate the articles for a ]. This can be done by ''any'' editor, even one not familiar with the subject. The implication/s for all articles related to Jews, Judaism, and Israel are very serious because many of these articles are of a specialized nature that may or may not be poorly written yet have important connections to the general subjects of Jews, Judaism, and Israel, as any expert in that subject would know.
:Two recent examples will illustrate this problem: | |||
::1) See ] where a notable Orthodox synagogue was deleted from Misplaced Pages. The nominator gave as his reason: "Scarce material available on Google, nor any evidence in those results of notability nor any notable size." Very few people voted and only one person objected correctly that: "I've visited this synagogue, know members, and know that it is a well established institution" which was ignored and the article was deleted. (I was unaware of the vote). | |||
::2) See ] where the nominator sought to delete the article about Rabbi ] because: "It looks like a vanity project to me. While he does come up with many Google hits, they are all commercial in nature. The article is poorly written and reads like a commercial to me." In the course of a strong debate the nominator defended his METHOD: "... what better way to do that than put it on an ] where people who might know more about the subject might actually see it and comment rather than slapping a {{]}} and {{]}} template on and waiting for someone to perhaps come across it." But what if no-one noticed it in time and it would have gone the same way as "Congregation Zichron Kedoshim"? Fortunately, people noticed it, no-one agreed with the nominator and the article was kept. | |||
:As we all know Googling for/about a subject can determine its fate as an article, but this too is not always a clear-cut solution. Thus for example, in the first case, the nominator saw almost nothing about "Congregation Zichron Kedoshim" on Google (and assumed it was unimportant) whereas in the second case the nominator admitted that Berel Wein "does come up with many Google hits" but dismissed them as "all commercial in nature". So in one case too few Google hits was the ''rationale'' for wanting to delete it and in the other it was ''too many'' hits (which were dismissed as "too commercial" and interpreted as insignificant), all depending on the nominators' ] of course. | |||
:This problem is compounded when nominators don't know ] or know nothing about Judaism and its rituals: they are at a loss, they don't know variant ] spellings, and, compounding the problem even more, Google may not have any good material or sources on many subjects important to Jewish, Judaic, and Israeli topics. Often Judaica stores clutter up the search with their tactics to sell products, or non-Jewish sites link to Biblical topics that ''appear'' "Jewish" but are actually missionary sites luring people into misinformation about the Torah and the Tanakh. So while Googling may yield lots of hits, they may mostly be Christian-oriented and even hostile to the Judaic perspective.
:'''Therefore, all editors and contributors are requested to be aware of any such attempts to delete articles that have a genuine connection to any aspect of Jews, Judaism and Israel, and to notify other editors'''. | |||
:'''Please, most importantly, place alerts here in particular so that other editors can be notified'''. | |||
:Thank you for all your help and awareness. ] 08:43, 19 May 2006 (UTC) | |||
</div> | |||
:how is this related to Judaism in particular? Pending evidence to the contrary, these are simply uninformed deletions for lack of proper references in the stubs deleted, without any sort of political agenda. Just recreate with proper references, and your article will be safe. ] <small>]</small> 09:56, 19 May 2006 (UTC)
:: I think articles for deletion in general is a mess. Sometimes we get (legal) threats over it, I gather. Even the international press occasionally catches a whiff of it. But as it stands, not even Jimbo Wales dares to delete it. (Ed Poor did try, and lost all his privs. :-/ ) ] 10:15, 19 May 2006 (UTC)
:::IIRC, Ed was de-adminned for deleting the ''criticism'' of his deletion of VFD. Deleting VFD itself was audacious enough that he got away with it. --] 03:26, 20 May 2006 (UTC) | |||
::::Technically, that's why he lost his bureaucrat status.<sup><small>]</small></sup> He was de-adminned for his conduct in a mostly unrelated situation involving {{user|FuelWagon}}.<sup><small>]</small></sup>--] 04:23, 20 May 2006 (UTC) | |||
:You make it sound like there's a mass deletion (dozens of articles) targeting only Judaism articles. Please don't make Misplaced Pages the scene of conspiracies. ] 17:00, 19 May 2006 (UTC)
Articles on individual Christian churches/congregations get deleted regularly, because they aren't considered notable unless they reach a certain size or do something to distinguish themselves from every other church within their denomination. Synagogues (or Judaism-related topics generally) aren't being singled out. From the deleted history, this particular one did not have any information of substance beyond an explanation of its name. You could always try ], but I can't see that succeeding here. ] 17:27, 19 May 2006 (UTC) | |||
Absolutely nothing improper has been done here. Sending articles straight to articles for deletion is completely legitimate. If it wasn't vast amounts of articles would never get dealt with. ] 18:16, 20 May 2006 (UTC) | |||
:If you are concerned that certain AfDs aren't getting the proper attention, can I suggest you create a ]-type list and have people monitor what's going on in AfD? Many of these lists are inactive, but some (like ]) are active and really help keep track of what's going on. Regards, ] 19:09, 20 May 2006 (UTC) | |||
For the first page, I feel I must argue in favor of deletion. Only 124 Google hits? Of course, I highly doubt that the user's vote was simply ignored. It's just that the deletion "votes" had much better arguments (show, don't tell) than the keep "vote". --] | ] | ] | ] 04:35, 21 May 2006 (UTC) | |||
:Maybe it is worth creating a guideline, say ], something like ]. The criteria for notability of rabbis seem weird to ignorant people like me, so it might be worth spelling them out. ] 01:52, 24 May 2006 (UTC)
==Wikimedia violating GFDL?== | |||
When editing articles, we "agree to license our contributions under the ]." Now the GFDL allows free redistribution and modification, ''as long as'' the original authors are credited. These credits are done via the page history, where the usernames of contributors are noted, with a link to the users' pages, where they can, if they so wish, identify themselves with their real names. Thus, as long as the entire database is being distributed, there is certainly no violation of the licence. | |||
Now, I noted that you can download stripped versions of the database, containing ''only the current versions'' of the articles. This appears to be a violation of the GFDL, since there is no way to link texts to their copyright owners if the full page history is not provided. To the best of my knowledge, contributors do not assign their copyright to Wikimedia, with Wikimedia in turn publishing the material under the GFDL (in which case a single "from Misplaced Pages" credit would suffice); rather, they publish their contributions under the GFDL, allowing Wikimedia and anyone else to host them with proper attribution. All Misplaced Pages mirrors that do not also mirror page histories are of course also in violation of the GFDL if I am correct, but it strikes me as particularly questionable if even Wikimedia itself offers GFDL-violating database dumps for download. ] <small>]</small> 09:53, 19 May 2006 (UTC)
: Yikes. I think database dumps should be looked into more closely in general, right now they're not that great. Don't forget that our dev team is chronically overworked though. They won't like this. ^^;; ] 10:13, 19 May 2006 (UTC) | |||
: Hm. I can't look at the dumps right now, but in my understanding providing the full content of all the revisions isn't necessary - providing a list of everyone who contributed is. Whether the "current-only" dump contains such a list for every article I don't know, but if it doesn't that should be added. --<span style="font-family:monospace"> ] </span>] 11:54, 19 May 2006 (UTC) | |||
::It would be nice to have some sort of "authorship" algorithm that determines the main authors of the current version (discounting minor edits and reverted contributions); the articles could then be displayed with an "authors" tab, or even a small footer, listing the main authors, if possible sorted by the amount of text contributed. I know this is not trivial, and should maybe be tweakable. But we could then request that mirrors provide this list of authors. As it is, most mirrors just have "from Misplaced Pages" (if even that), which clearly isn't sufficient. If you just list everybody who ever touched the article, the actual authors will be indistinguishable from vandals, people who reverted the vandals, and spellchecking or disambiguation edits. Clearly, correcting a typo is not a copyrightable feat, and consequently the corrected version doesn't qualify as a derivative work. Whatever you distribute, it has to be clear who wrote the actual article; otherwise, you could just as well list the entire 1 million userbase of Misplaced Pages with every article, which would certainly mean that the actual authors are listed ''somewhere'', but that would hardly constitute proper attribution. ] <small>]</small> 15:24, 19 May 2006 (UTC)
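The heuristic proposed above (credit by net text contributed, discounting reverts and trivial copyedits) could be sketched roughly as follows. This is a hypothetical illustration only: the revision format, the minor-edit threshold, and the revert detection are all invented assumptions, not anything MediaWiki actually implements.

```python
# Hypothetical sketch of the "main authors" heuristic described above:
# rank contributors by net text added, skipping reverts (revisions whose
# text exactly matches an earlier revision) and trivial copyedits.
# The data format and the threshold are illustrative assumptions only.

MINOR_EDIT_THRESHOLD = 20  # chars of net new text below which an edit is "minor"

def main_authors(revisions):
    """revisions: list of (username, text) tuples in chronological order."""
    seen_texts = {""}   # the empty page counts as already seen
    contributed = {}    # username -> total chars of net new text
    prev_len = 0
    for user, text in revisions:
        if text in seen_texts:
            # Exact match with an earlier version: a revert, credit nobody.
            prev_len = len(text)
            continue
        seen_texts.add(text)
        delta = len(text) - prev_len
        prev_len = len(text)
        if delta >= MINOR_EDIT_THRESHOLD:
            contributed[user] = contributed.get(user, 0) + delta
    # Sort by amount of text contributed, as suggested above.
    return sorted(contributed, key=contributed.get, reverse=True)

revs = [
    ("Alice",  "A long opening paragraph about the subject of the article."),
    ("Vandal", "lol"),
    ("Bob",    "A long opening paragraph about the subject of the article."),  # revert
    ("Carol",  "A long opening paragraph about the subject of the article. "
               "Plus a second paragraph with more substantial detail added."),
]
print(main_authors(revs))  # Vandal and the revert are excluded from the credits
```

A real implementation would of course need word-level diffs rather than raw length deltas, but even this crude version separates substantial authors from vandals and reverters.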
:Hmm, this certainly needs to be looked into by someone higher up the WikiMedia foodchain. Although everyone probably understands the why of the database downloads, there definitely seems to be a potential legal problem here - ] 16:52, 22 May 2006 (UTC)
::Does the full authorship need to be mentioned in the distributed copy, or could you just put a disclaimer that says, "For a full list of authors for this article, please see the revision history for the article in question on www.wikipedia.org" ? <b><i><font color="#FF00FF">~Kylu (]|]) </font></i></b> 23:37, 29 May 2006 (UTC) | |||
:::I don't know if the GFDL specifies. I'm pretty sure that most people are okay with just a linkback, though. —] (] • ]) 02:41, 30 May 2006 (UTC) | |||
::::Considering that the GPL and GFDL both require the inclusion of the fulltext license with redistributions, out of fear that URLs may become outdated or invalid, I'd be surprised if they'd allow indirect copyright owner lists. But this is a matter for Wikimedia's official legal people to look at. ] 02:48, 30 May 2006 (UTC) | |||
== New Criterion for speedy deletion of ridiculous vanity == | |||
At ] we get a large number of articles like ], which contain adverts, whether added by a well-meaning inclusionist or a conniving exploiter, of such '''''patent inappropriateness''''' that it seems highly roundabout to go through the whole rigmarole of AfD, or even Prod, to get rid of them. For example, the cited example reads: | |||
:Zonko's Store Branch of Services.com | |||
:Welcome, to Zonko's Store Branch of Services Server Edition! or ZSBoS SE! We work with computers such as Windows XP, Mac OS X and even the Windows 95, 98 and 2000 editions. We work with Pocket PCs, Tablet PCs, Laptops, desktops, palm devices, iPods and hardware. | |||
:If you have any computer difficulties pelase e-mail erc1995@yahoo.com. '''You may feel free to leave messages in the "Discussion" tab of this page.''' (emphasis added) | |||
As far as I can see, this is not covered under any speedy deletion criterion, but I think it should be. This editor is clearly using wikipedia for apparently commercial purposes, in a cynical and blatant way. I propose some kind of {{db-vanity}} rule. Since every speedy deletion has to be taken care of by an admin, any bad-faith or questionable speedy nominations would still be sent to AfD or whatever, but a criterion of this nature would help speed up the wikipurging process no end. ] 02:01, 20 May 2006 (UTC)
:In my opinion, this one is covered by the existing #7 under Articles (see ]). -- ] <small>(])</small> 04:03, 20 May 2006 (UTC) | |||
:We've got a perfectly good deletion criterion that covers this: ]. --] 04:38, 20 May 2006 (UTC)
::A7 states "An article about a real person, group of people, band, or club that does not assert the importance or significance of its subject"; are we allowed to include other types of article in this category? Because if so, then there's no problem, but it's a slippery slope to bad-faith nominations to start twisting CSD ambiguities to include whatever articles we want. Also, re: ], what do you mean? ] 11:39, 20 May 2006 (UTC)
:::I'd sooner support broadening A7 to include "organization or company" as well as "group of people" (although, pedantically, a company IS a group of people, it's not the intent) than to use IAR to delete them. That way lies divisiveness. So I do support changing A7 as suggested. '''<font color="green">]</font><font color="blue">]</font>''']: ]/] 12:35, 20 May 2006 (UTC) | |||
::::I'll support you. A7 does not appear to allow me to delete spam or adverts about a company, though it can be stretched at times. This means I'm using prod for some of these, or leaving the article on speedy for someone else to decide if the article meets the speedy criteria. ] 22:46, 20 May 2006 (UTC)
::::A7 shouldn't cover companies. Why not just stub it and prod it, or send it to AfD and be done with it? --] <small>(])</small> 01:42, 21 May 2006 (UTC) | |||
:::::Because a large number of these are not worth wasting AfD's time and effort on. I'm not talking companies of dubious notability, I'm talking bona fide spam. ] 02:35, 21 May 2006 (UTC) | |||
:::::Why shouldn't A7 be allowed to cover companies? -] 02:51, 21 May 2006 (UTC) | |||
Pages like that are routinely speedied as crap. We probably don't need an extra CSD for them. --] 02:46, 21 May 2006 (UTC) | |||
:Ok, that's fine, but which CSD does "crap" fall under? ] 12:49, 21 May 2006 (UTC)
The policy change that was approved in December 2005 was: ''"In short, my proposal is to expand CSD A7 to include non-notable groups of people as well as individuals. This would apply to bands, clubs, '''organizations,''' couples, families, and any other collections of individuals that do not assert their importance or significance."'' (my highlighting) CSD really should apply to organizations, because that was the wording that was voted on and approved. So why doesn't the current wording reflect this? Because of edits like . ]]]\<sup><font color="gray">]</font></sup> 05:12, 21 May 2006 (UTC) | |||
*Maybe because organizations are not considered companies? If they are, and this is put into policy somewhere, it would reduce the load on AfD and Prod. ] 20:39, 23 May 2006 (UTC) | |||
The worst of crap is often deleted by G2 (vandalism). I would estimate that over half of all advertising articles get deleted for being copyvios of the companies' webpages. Otherwise, we always have a chance to improve an advertisement to a reasonably neutral article, so all is not lost if we fail to delete those things speedily. ] ] 14:26, 24 May 2006 (UTC) | |||
:I've encountered an interesting loophole there. It has already been decided, on ], that spam is a form of vandalism, but spam is not included in the definition of vandalism on ]. That disparity leaves doubt in the minds of some admins as far as whether or not spam can be speedied as vandalism. Some will do it, some won't. Reconciling the CSD definition with the actual vandalism definition should solve the problem. ] 14:48, 24 May 2006 (UTC) | |||
::I would distinguish between pure spam and an article on a company with a clear pro-bias. Biased pro-corporation articles can usually be cleaned up to form a neutral article with the information that is there, and are probably not speedy candidates. ''Spam'' on the other hand (e.g. "Having trouble with your love life? We offer the market's best deal on the remedy! Get 100 Viagra tablets for just 10 dollars. Click here for a great offer!") is well within the boundaries of vandalism and speediable. ] ] 06:11, 26 May 2006 (UTC)
== Offensive comments in afd discussions == | |||
I find this edit upsetting and offensive (instead of voting "delete", voting "kill" on an article about a member of the ] who was shot by the ]). Is there any policy or guideline that would prevent users from making edits such as this? I feel such comments can only add a feeling of hostility to the project, which is not what we want here. ] 01:37, 21 May 2006 (UTC)
:Get over it. ]|] 03:41, 21 May 2006 (UTC) | |||
:::As Zoe said, forget about it; you don't have the right to freedom from offense. Negotiate the objective facts of the situation. Reason is the most effective way to quell emotional responses from others. --] 13:09, 21 May 2006 (UTC) | |||
::::Sorry, but I'm still not happy. There should be a zero-tolerance policy toward this sort of veiled threat, whichever group of people it refers to. ] 16:30, 21 May 2006 (UTC)
:If you're looking for a generic "play nice, don't be an asshead" policy, sure, ]. If you're looking for one that specifically says that people should be nice when voicing their desire to delete articles, I don't think such a policy exists (and I don't think one is required, for that matter). | |||
:That said, you may also want to consider ], and in any case, no policy ''prevents'' anyone from making an edit, though a policy may dictate that certain edits have consequences. Personally, I'd recommend talking about it with the person in question -- if only to ensure that he really was rude on purpose, as opposed to just choosing his words poorly but without malice, for example -- before making a bigger issue out of it. (Then again, in all honesty, I don't find the comment in question particularly offensive. That may just be me.) -- ] 03:43, 21 May 2006 (UTC) | |||
:The comment was perhaps ill-chosen, but I very much doubt there was any malice behind it. ''Kill'' is ubiquitously used to mean "get rid of". I wouldn't worry about it. —] (] • ]) 05:30, 21 May 2006 (UTC) | |||
::I find it hard to assume good faith when I can find no instance where Humus Sapiens has used that phrase in any other afd discussions. I would ask all people to please keep their religious/national bigotry out of Misplaced Pages. There is a real problem with people who are on opposing sides of various conflicts either ganging up to vote on "''the enemies''" in afd or cfd discussions or adding partisan information to articles and claiming it must be added to "balance POV". The main problems lie in these areas: Jewish-Palestinian conflict, Northern Ireland conflict, Turkish-Kurdish conflict, Albanian-Serbian conflict. I'm sure there are others, but these are the ones that have caught my attention. ] 08:56, 21 May 2006 (UTC)
:::Okay. Even if we assume that he was not making a joke or choosing his words poorly or just otherwise shooting off his mouth, and was actually out to offend -- all of which strike me as fairly strong assumptions to make without actually checking with him to see what his actual intentions were -- what would you like to do about it? -- ] 09:10, 21 May 2006 (UTC) | |||
::::I am sure they must have been aware of the possible interpretation of the word "'''kill'''" in a discussion about a person who was "''almost''" killed by the ]. As I said above it is just another example of someone using bigoted language in discussion of article which is on ''"the other side''" of whatever conflict it happens to be. ] 09:20, 21 May 2006 (UTC) | |||
:::::Yes. And as I said, ''I'' don't find that interpretation particularly offensive -- not unless I ''know'' that there's an intentional effort on his part to offend or to celebrate or wish for the death of another human being for political or other reasons. Have you taken this issue up with him? Do you know what his intentions are? If you do, great, then you have something to go on. If you're just making an assumption, then frankly, I think you're on the wrong track. In any case, I repeat: what would you like to do about it? If you feel he has violated ], you can of course always take it to mediation, but frankly, if you don't even talk to the guy first and determine his actual intentions or give him the chance to apologize (particularly if it was ''not'' his intention to offend), that's not really your best choice. I don't think anyone here can provide you with a solution any better than that. I mean, if you just want to vent about it, that's okay too, but it isn't going to solve your problem. -- ] 10:42, 21 May 2006 (UTC) | |||
::::::Sorry, what interpretation don't you find offensive? ] 16:06, 21 May 2006 (UTC) | |||
:::::::I don't find the use of the word "kill" offensive in the context of deleting an article that is about someone getting injured or killed unless I know for sure that the word is being used with malice. But that's me; again, your mileage may vary. In any case, that's neither here nor there; we can argue about whether it's offensive or not, but that's completely irrelevant, because it won't do anything to solve your problem. So I'm just going to ask you again: What would you like to do (or see done) about this incident? If you just want to register your displeasure, consider it registered and noted. It's done. If that's not enough, do something about it -- as far as I know, you haven't even talked to the guy about this. It's entirely possible that he's not even aware that what he said is being talked about (which, to be honest, doesn't really strike me as very cool). I again suggest you either take it up with him (and if that fails, go to mediation or otherwise try and work it out) or let the matter drop. | |||
:::::::(And on a somewhat related note, your "veiled threats against people" statement above doesn't really convince me. If you are suggesting that the guy is seriously (if indirectly) proposing that Phil Reiss or other members of the ISM should be killed, I don't see it, and unless you know for a fact that it's what he wants -- y'know, by, say, taking the time to ''talk'' to him and making sure you haven't misunderstood him -- I think it's foolish to claim that he does.) -- ] 18:15, 21 May 2006 (UTC) | |||
:I think you'll have to live with this particular instance of word usage, and others like it. However, I'd point out that the description "minor" "non-life-threatening" was very inaccurate. (Contemplate for a moment something hitting your head hard enough that you bleed inside, and commonly require a hole being made in your skull to depressurise it - minor? Mortality is not small.) Participants in that afd were invited to consider the article with a misleading description of the magnitude of the injury. "Rubber" bullets are not supposed to hit people's heads, and if they do, they have probably been fired negligently or deliberately to do so. ] 16:45, 21 May 2006 (UTC)
::OK, I'll have to be blunt. Many articles relating to ], or to people or organizations seen as supporters of the Palestinians, are in danger of not being neutral, because many of the contributors to these articles hold a very pro-Israel POV (the nominator of this article is Jewish and five of the delete votes at least were from Jewish users). Of course it is to be expected, when you have one people in conflict with another, that this kind of thing will happen with a worldwide audience; a similar thing was seen in the deletion discussion on ], where many of the votes to delete were clearly from Turkish users. Unfortunately, at the moment there don't seem to be many members of the Misplaced Pages community working to stop these articles straying into favouring an Israeli point of view rather than being neutral, which is sad in my opinion. ] 19:23, 21 May 2006 (UTC)
:::And what does that have to do with this comment at that AFD? Do you have a point?--] 19:27, 21 May 2006 (UTC) | |||
::::It is directly related, yes. My point should be clear from the above. ] 19:32, 21 May 2006 (UTC) | |||
:::::I think you have a choice between taking this one-word comment as a good-faith (but perhaps poorly phrased) way of saying "delete." versus a deliberately offensive attempt to fan the flames, and you're choosing the latter course because it fits your account of POV-by-consensus on Misplaced Pages. You may have a point worth debating, but tying it to this particular AfD isn't a persuasive way to do that. <b><span style="color: #f33">·]·</span></b> 01:47, 22 May 2006 (UTC) | |||
::::::The editor chose to use that word when they have never used it in an afd discussion before. It is absolutely impossible that they would just have suddenly started to use that word instead of delete starting with that afd discussion without being aware of the significance of using it on that discussion. And yes the word is not really that important as you say, although I do believe it is a violation of ]. Perhaps also we should consider general veiled threats to a group as a violation of ], for example if a person voted kill on an afd discussion on a homosexual person, and there was evidence that they were homophobic, although it would not be a personal attack on anyone in particular I think it should be a blockable offense. ] 02:35, 22 May 2006 (UTC) | |||
::::::::"Absolutely impossible"! I think not. I think you overstate the significance, but in any case, if you think this user has committed some serious violation of policy, take it to ]; why are you posting on the Village Pump? Other people have told you they think you're overreacting, but you aren't inclined to accept that advice, so what do you want to happen? A mob? <b><span style="color: #f33">·]·</span></b> 03:21, 22 May 2006 (UTC) | |||
:::::::::You bring the torches, I'll bring the pitchforks. PARTY! ] ] 03:32, 22 May 2006 (UTC) | |||
:::::::Let me make sure I understand you. Are you suggesting that because of this single incident (in which there is no confirmation of the editor's motivations and in which, as far as I know, no one has even informed the editor about this ongoing conversation (and really, allow me to be blunt here, because clearly subtlety isn't working: ''before you continue making these accusations behind his back, which I find distasteful at best, please find out if he actually meant it the way you insist he did'')), it would be appropriate to block the editor? -- ] 03:35, 22 May 2006 (UTC)
::::::::I am not saying it is all about this single incident at all. As I said above it is just an ongoing pattern of behaviour of users on one side of a conflict "ganging up" on articles that are on the "other side". I don't really know how we can deal with that in such an open system, except by maybe informing the wider community that this is going on. If we do nothing, it will mean that Misplaced Pages will not be considered to be a neutral source of information on some very significant issues. ] 15:20, 22 May 2006 (UTC) | |||
::::::::: This was just brought to my attention. I feel that it would be only fair (per ]) to ask me first. Let's keep in mind that the vote was about the article, not the person. Of course it was a joke, perhaps tasteless, but I support the editors' right to a (perhaps imperfect) sense of humor in talks & votes. Now, taking a closer look at this discussion, it seems that this occasion was chosen as a ]. ←] <sup>]</sup> 00:44, 24 May 2006 (UTC)
::::::::::Pointing out that there is a problem with articles related to certain conflicts has nothing to do with ]. ] 01:02, 24 May 2006 (UTC)
:::::::::::I thought Humus was making a reference to ]. —] | ] 01:13, 24 May 2006 (UTC) | |||
::::::::::::Good grief! When I saw it I just thought it was a funky way of saying 'delete'. I certainly didn't assume HS meant the subject of the article should be physically killed, or any other bad faith assumptions mentioned above. Arniep, why didn't you just discuss this with HS? It appears HS had to stumble on this discussion about your concerns with his statement on his own after several days. --]<sup><small>( ] | ])</small></sup> 03:38, 24 May 2006 (UTC) | |||
This complaint is ridiculous; does anybody actually believe that Arniep was actually offended? I found the basis of his "indignation" either extremely disingenuous or unbelievably silly.- ] | ] 06:47, 24 May 2006 (UTC)
: This discussion should be killed without mercy. Can't believe it has been allowed to grow to almost 14000 KB.--] ] 12:40, 24 May 2006 (UTC)
You know what's far more offensive than the use of the word "Kill"? It's Arniep's statement that "the nominator of this article is Jewish and five of the delete votes at least were from Jewish users". How does Arniep know this, and why on earth would it make a difference, or even be relevant to this particular discussion? Care to explain, Arniep? ]<sup><small><font color="DarkGreen">]</font></small></sup> 13:57, 24 May 2006 (UTC) | |||
:I was wondering the same thing. ] <sup><font color="Purple">]</font></sup> 17:04, 24 May 2006 (UTC) | |||
==Morale Issue== | |||
I put a discussion on the Admin/Incident page. It is probably better here. My problem is that two people get into edit wars over content. This degrades into name-calling. If one is an admin, or has a good friend who is an admin, he/she pulls rank and virtually says: the article is this way, end of story. If the other person debates this, it typically becomes personal, and the admin will start the name-calling, e.g. troll, vandal, etc. If the other person responds, and especially breaks a rule, the admin can ban that person. Does anyone else notice this? Does anyone have a solution or a way forward? Thank you. ] 08:25, 21 May 2006 (UTC)
:Obviously all cases are different, but if you feel that there has been a conspicuous misuse of power by an admin, or indeed misconduct by any user, the place to raise the issue would be ]. However, I would suggest that you attempt to resolve the situation amicably between yourselves before jumping straight in with an RfC, although this is unfortunately not always possible. ] 11:29, 21 May 2006 (UTC)
::I would expect an admin who used blocks/bans solely to get his/her own way in an edit dispute to be desysopped, and I believe most admins would concur. | |||
::That said, if a user is harming the encyclopaedia then it may well be the duty of an admin to block them, at least for a short time. I'm a strong believer that blocks should be protective rather than punitive (i.e. they should be used to protect Misplaced Pages rather than to punish 'breaking a rule'), but I certainly think most blocks carried out by admins are justified. | |||
::In either case, as ] says above, this is what the ] is for -- but we would much prefer it if disagreements were sorted out amicably between users behaving in a friendly and adult fashion. --] 11:35, 21 May 2006 (UTC) | |||
::Incidentally, do you have any actual examples of the behaviour you describe above? I can think of no examples off the top of my head of admins engaging in edit-warring and personal attacks of the kind you describe. --] 11:36, 21 May 2006 (UTC) | |||
:::PS: Don't you mean it to be a "moral" issue? --] 13:07, 21 May 2006 (UTC) | |||
::::nope. Morale, as this can lower it. Thanks. ] 17:01, 21 May 2006 (UTC) | |||
:::No. I do not act in this way. I believe in alerting the community to something that is happening. In this way, people who are doing this sort of thing may change direction. I think it is already happening. It is also more effective if a third party gets involved and tells the admin to stop it. As far as disputes between users are concerned, this is a dispute between a user and his/her admin (i.e. boss), not one between equals. Naturally, if the user mentioned is an admin, the admin would probably not react favorably, as most people do not like criticism, and such an admin is very unlikely to be a humble person. ] 17:00, 21 May 2006 (UTC)
::::Erm. Being an admin is ''no big deal'' on Misplaced Pages. An admin is certainly not any user's boss. In a content discussion, both are indeed equal (other considerations aside), and if the admin in question does not recognize that, he or she will be reminded about it rather fast. In particular, admins are forbidden from using admin powers in conflicts they are directly involved with. --] 17:09, 21 May 2006 (UTC) | |||
:::::That's a tall tale that is used to lull people into accepting admin power. Admins are a pain in the butt and we shouldn't have them. ] 10:46, 23 May 2006 (UTC) | |||
::::I'll ask again, just to be sure: do you have any actual examples of the edit-warring and personal attacks you complain of ''actually happening''? 'I think it's already happening' isn't good enough. If it's happening, show us where, and we'll take action against it. If it's not happening, then this discussion is about solving a non-existent problem and we can all move on. --] 18:23, 21 May 2006 (UTC) | |||
==The Use of Scientific Names over Common Names== | |||
As an amateur herpetologist and former snake keeper, I recently took an interest in writing articles on the many different species of vipers (see ]). I really only started doing this in earnest back in April this year. The very first problem I recognized, was that there was no common structure to tie all of the existing articles together. For example, sometimes only a subspecies was described, but not the species, the genus or the subfamily to which it belongs. This is not very orderly -- chaotic even -- and a waste if the descriptions in such articles cover characteristics common to an entire family. Misplaced Pages obviously has a lot more growing to do, so I think it would be in the best interest of everyone to prevent this kind of disorder from getting out of hand before the number of articles on biological organisms becomes too great. | |||
The most obvious structure to apply here is systematics: the science of taxonomy and binomial nomenclature. More than a million species of animals and half a million species of plants and microorganisms have been described by science and I'm sure that it is everyone's hope and wish that we will eventually see Misplaced Pages articles dedicated to each and every one of them. However, it is obvious to me and others that the current policy of using common names over scientific names for page titles whenever possible (a consensus which, I hear, was only ever loosely agreed upon) is simply not good enough to achieve this end. What we need is a standard for naming articles on biological organisms that is predictable, promotes structure and prevents the propagation of errors. The best way to achieve this is with scientific names: not common names with a needless array of redirects and disambiguation pages.
Except in the case of birds, where the American Ornithologists' Union has established official common names for each species, there are often many different common names. Snakes are a good example. Take '']'': if this page were to be changed to a common name, should it be massasauga, or eastern massasauga, or ground rattler, or swamp rattler, or even Michigan rattler? Those are all recognized common names for this species, particularly in the United States, but naming Misplaced Pages's page for it should not be reduced to a popularity contest. There is one perfectly valid name for this species, recognized the world over, and that is its scientific name: ''Sistrurus catenatus''. Only then can there be no doubt regarding the subject of the article. | |||
Another thing I've noticed about Misplaced Pages is that only the article names get indexed, as opposed to redirects and entries in disambiguation pages. This is hardly surprising, but if ten years from now we have 100,000 articles on biological organisms -- most with common names -- this will make the indexes pretty much useless. How can you be sure how many '']'' articles there are if they're scattered all across the index? There are currently 43 different species and subspecies, yet if the standard was to use only scientific names for each page, they would all be found under the T and line up neatly under the entry for ''Trimeresurus''.
Let's take a further look at this with some current indexing examples. | |||
* : I'm sure there are many more bird articles than this, but it seems as if the bird folks just don't care much for indexing. | |||
*:Responding to this point in particular: According to Misplaced Pages's categorization guidelines, there should be few if any articles in ] — articles should be in the narrowest applicable subcategory. Thus, a bird article should be placed directly in ] only if it didn't fit into any of the existing subcategories (and if so, that might be a good reason to start a new subcategory). Similarly, we don't list sharks in ], nor vipers in any of ], ], ] or even ]. —] <small>(])</small> 19:40, 21 May 2006 (UTC) | |||
* : Looks like there are more eagle articles than bird articles. However, the species and genera are mixed up so you can't tell which ones are more closely related. I've only managed to find this information on the ] page, where the list of species is grouped according to their generic (scientific) names. | |||
* : There's more here, but once again, they're all mixed up. Even if you know your sharks, it's hard to see immediately if there are articles present for all the members of any particular genus. | |||
* : Indexing done properly. All the ''Sistrurus'' species are under the S and all the ''Trimeresurus'' species are under the T. This way, even if the list is 100 times as long, it's still easy to find all the articles for a certain genus and see which ones are missing. | |||
Looking again at that ] page, it reminds me of how common names do not encourage any structure when writing articles on biological organisms. There are many genera of eagles listed there, yet it seems the authors have spent most of their time producing articles only for the individual species. It would have been far more efficient to tackle the families, subfamilies and genera first. Those are the places to describe the defining characteristics of each in order to avoid having to repeat them in each of the species articles. A single eagle article to describe a number of genera is not specific enough. If all of the species are going to be described anyway, the more structured approach is also the best way to show people the differences between the various eagle genera. | |||
Recently, one Wikipedian argued that things should be left as they are, because for a "normal" person to look for "gaboon viper" and end up with '']'' would be too jarring an experience. I say that if systematics is the best way for Misplaced Pages to self-organize, then why shouldn't we encourage people to follow and learn? We can still use common names as redirects, in disambiguation pages, and even make liberal use of them if necessary in the actual articles; that way, people will still know what they are about, as long as we emphasize the importance and use of scientific names for organizing those articles. I also believe that this is a good way to attract more interest from graduate students, professionals and other more knowledgeable individuals who would then be more willing to write articles for us. Which is what we want, right?
It is said that nothing in biology makes sense except in the light of evolution. As opposed to common names, binomial nomenclature is the best way to illustrate how the different species are related because of the way they evolved. Those names are a reflection of our current understanding of how life evolved on the planet. If Misplaced Pages is still around in some form or another 100 or even 1000 years from now, our generation will not be remembered as much for the articles that we wrote, as for the structure and organization that we imposed upon them at this early stage, which in turn allowed it all to grow properly and thrive. | |||
In case you're interested, I've argued before in favor of scientific names over common names on and on the . At one point it was suggested to me that this was the proper place for me to state my case. I hope so, as well as that reason will eventually win the day concerning this issue. --] 19:27, 21 May 2006 (UTC) | |||
Misplaced Pages policy is to use titles which are the most commonly known to the average reader, not to the scientific community. Should we call blackbirds ''Turdus''? Who's going to know that? Call it by the common name and redirect the scientific name. ]|] 19:30, 21 May 2006 (UTC)
:Also, it should be noted that, as long as the appropriate redirects are in place, one can get a pretty good listing of a genus via, f.ex., ]. —] <small>(])</small> 19:45, 21 May 2006 (UTC) | |||
::Another way, of course, would be to categorise the redirects. See discussion above. If you categorised all the ] redirects in ], you would have your index. ] 19:58, 21 May 2006 (UTC) | |||
:::About the "Special:Prefixindex" comment - I've seen people linking to that kind of thing from main articles. I agree it is a very useful tool, but have refrained from similar linking from main article namespace, because the "Special" namespace seems to be self-referential. Having said that, there should be a way to make such browsing tools as "Special:Prefixindex" available to the reader. And the results of such discussions should be added to some guidelines document so people don't have to perennially have these discussions. ] 20:14, 21 May 2006 (UTC)
I agree entirely that both common names and scientific names should be listed in the category system, but in separate areas. Thus people would be able to choose the option to browse by common name, or by genus name. One way to do this is to categorise redirects, but really any method would be OK as long as people retain the option to browse either way. We should not force people to browse only one way. Can someone please pass these ideas over to the people working on biological articles. ] 20:08, 21 May 2006 (UTC) | |||
:Of course, it is very important that common names be ''mentioned'' in the articles, but except for very well-known ones (for example, ], ], or ]; or even ]), I think that the titles should be scientific names. How many people know what a ] is, anyway? In many cases, there are multiple common names, and choosing one of them creates a ]. Sometimes, a common name refers to different species, creating confusion (see my experiences with ]). ] 20:17, 21 May 2006 (UTC) | |||
:<small>(after massive edit conflict)</small> If there's no widely-accepted common name (as seemingly is the case in your '']'' example), the only sensible location is at the scientific name. Something like ], Zoe's counterexample, should probably stay where it is. If you would like it to be possible to browse by taxonomy, I suggest you categorize scientific names, including redirects—add to ], then add that category to ], and that to ], that to ], that to ], and that to ]. Do that for every single taxonomically-named article and redirect, and you're set. Of course, it'll be a big job, but that's what ] is for. —] (] • ]) 20:19, 21 May 2006 (UTC) | |||
::The categories don't normally include the rank: ], etc. exist already. ] 20:32, 21 May 2006 (UTC) | |||
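::For anyone unfamiliar with the mechanics being discussed, a categorized redirect is just an ordinary redirect page with a category tag added below the redirect line. A minimal wikitext sketch (the page and category names here are only illustrative examples, not actual pages):

```wikitext
#REDIRECT [[Massasauga]]

<!-- Adding a category to the redirect makes the scientific name
     appear in that category's index, even though the article
     itself sits at the common name. -->
[[Category:Sistrurus]]
```

::The redirect would then be listed in the category alongside regular articles, so a reader browsing by scientific name could still find every species of a genus in one place.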
*First: Redirects are cheap. To the average reader it doesn't make any difference at all whether they found an article by typing in ] or ]. Most of these debates over article naming are, therefore, not important at all. And we can use disambiguation to ensure that both British and American readers can find the animal they are looking for under ]. This is very unlike a print encyclopedia, where it may take a big fraction of a minute to thumb through the index or take down and open a different volume if one's first guess at the entry name is wrong. | |||
*Second: Misplaced Pages's ] is, and has always been, to use the most common name. Not the right name, not the best name, not the most technically correct name, but the most common name. This has many advantages. One is that "the most common name" is ''usually'' easier to determine and to get consensus on than the most technically correct name.
*Third: Misplaced Pages exists to serve the general reader, not to impress the technical reader with our degree of technical expertise. Maybe taxonomists will have a lower opinion of us for using common names, but that's not the audience Misplaced Pages serves. (Most academics don't like us, anyway). ] ] 13:49, 24 May 2006 (UTC) | |||
:A comment on your first point: I agree that it doesn't matter whether a reader tries to find something (either by searching, typing in, or browsing the category system) by common name or taxonomic name, BUT, it is important that Misplaced Pages does not let readers fall into the trap of thinking they only need to know the common name. A reader of Misplaced Pages should come to realise that in many cases if they want to know about a certain species or genus, they will need to know the species and/or genus name. In other words, the ease by which common names can be redirected to scientific names shouldn't fool the reader into thinking that the common name is "correct", or that they will be able to do this for all animals/plants/whatever. It is important to remember that people can learn things not just from the content of an encyclopedia, but also its structure. ] 16:36, 24 May 2006 (UTC) | |||
:I'd like to comment on the third point. I mainly work on plant articles, and plants often have many common names, and often have common names in common (for a taste of this, see ], a page I haven't had time to work on lately). I've also had the experience of seeing articles linked by one common name on a genus article, but the link was red. Turns out that there was an article on the plant, using another common name, but there was no link from the genus to the scientific name of the species. | |||
:I think the use of common names also inevitably entails choosing one ] over another... my neighbor and I had a laugh over what he called a black ash, and what I called a boxelder. However, if you search for boxelder, you get redirected to "manitoba maple", a name absolutely no one would use in ], which makes me think perhaps wikipedia has a ] slant.
:Now that might seem a bit silly, but I hope it illustrates my point. The Linnaean system of binomial nomenclature was created so that people from different regions (and of course speaking different languages) could have a "neutral, objective" language to use when discussing the same organism. Doesn't it make sense for wikipedia (with its neutral point of view) to use this tool when it's already available to us? ] 20:32, 24 May 2006 (UTC)
:This specific topic has received extensive attention at ]. You're probably best off discussing it at ]. Our overarching naming convention in a nutshell is "Names of Misplaced Pages articles should be optimized for readers over editors; and for a general audience over specialists" (from ]). In my opinion, the only case in which I would prefer the scientific name is the case in which there is no unambiguous common name for the species. How silly would it be if we moved ] to ]? ] 20:52, 24 May 2006 (UTC) | |||
::Actually, I don't think that would be silly at all! It's an advantage wikip has over paper 'pedias, really... because there's something rather elegant about searching for ], and reaching an article that starts with "'''''Canis lupus familiaris''''', the '''dog''', etc., etc....". Such a beginning to an article is not only (presumably) non-offensive, but perhaps educational. Isn't that what an encyclopedia is for? ] 21:12, 24 May 2006 (UTC)
:Just another thought on Jwinnus's opening: can redirect pages have category tags? That might help take care of the indexing issue. ] 10:44, 25 May 2006 (UTC) | |||
::Indeed it would take care of the indexing issue. Maybe you've been reading this ] further up the page? :-) ] 09:40, 26 May 2006 (UTC) | |||
:::Nope, I hadn't :). It's come up on the plants project talk page as well. I guess one problem with this is that the cat indices will soon be enormous if every plant is listed numerous times (assuming they have numerous common names, and/or different binomials because of taxonomic changes).
:::Anyway, why would a user (as opposed to an editor) need to look at indices? For plants, for example, there is the category "lists of plants", and these lists might tend to be much more useful to a reader than the category lists, which say very little about the pages on them. In general, when I'm looking for something, I just search for it. (My apologies for not wikifying all the pages I just mentioned... no time). ] 14:30, 26 May 2006 (UTC) | |||
::::The advantage of categories over lists applies where a list will expand for the foreseeable future (and new plants and animals are discovered all the time - and for the smaller lists of species, creating a category from a list is not much work). In the case of common names vs scientific names, the category could split at suitable points (depending on the exact taxonomy and status and diversity of common names) into "categories by scientific names" and "categories by common names" - mixing the two would be disastrous. You could then have a category of all animals commonly called lions, robins and caterpillars: "african lion", "american robin" (this example deliberately chosen - I know it is not a robin), "eastern red caterpillar", but these would all be redirects to disambiguation pages or pages under the scientific names. But as someone else said, this does need to be thought through carefully. But I for one would much prefer to be able to drill down through categories using the scientific names - currently you can do this by using the taxoboxes to navigate. The question is, would categorising the redirects from scientific names duplicate the work done with taxoboxes? ] 10:30, 27 May 2006 (UTC)
::::As an example, using the taxoboxes, I just navigated from ] right up to domain level. It was interesting seeing various minor subrankings appear along the way: | |||
::::*] | |||
::::*] | |||
::::*] | |||
::::*] | |||
::::*] | |||
::::*] | |||
::::*] | |||
::::*] | |||
::::*] | |||
::::*] | |||
::::*] | |||
::::*] | |||
::::*] | |||
::::*] | |||
::::*] | |||
::::*] | |||
::::*] | |||
::::*] | |||
::::*] | |||
::::So, is it worth replicating this '''taxobox navigational structure''' that uses ''scientific names'' in the '''category system''' that currently uses only ''common names''? (This would be done by categorising the redirects from scientific names in their own categories). Or rather, should all the "common name" categories have a blurb saying "if you want to navigate up or down taxonomic levels by scientific name, please use the taxoboxes, starting from..." Also, there should be a page explaining all that, which I think is at ]. This would all tie categories and taxoboxes closer together, rather than having them exist in relative isolation. ] 10:47, 27 May 2006 (UTC) | |||
Nobody is suggesting that Misplaced Pages be turned into some kind of platform for scientific publications; only that there is a better naming convention for the articles. Binomial nomenclature isn't just popular with scientists: it's a common language for normal people all over the world who just want to make sure that they're all talking about the same organisms. The Linnaean system has done a great job at this for almost 300 years, which is why not using it is a big mistake. Why reinvent the wheel?
To suggest creating redirects for scientific names instead is to miss the point entirely. That is to trivialize the value and ignore the utility of the binomial system: | |||
*'''Uniqueness'''. Where the current policy offers ambiguity and disambiguation pages as a workaround, scientific names are always unique and precise. Binomial nomenclature works as a filing system and information retrieval system, giving immediate access to all known information about a particular species. Naming a Misplaced Pages article on a plant or animal should not be reduced to a popularity contest. | |||
*'''Universality'''. Any particular common name may be used a lot in some (English-speaking) countries (or on the Internet), but it may not be in others. In contrast, scientific names are universal and the same in all countries, whether the primary language is English or not. Scientific names are therefore more neutral. | |||
It is ridiculous to say that those in favor of scientific names are out to stop people from using common names or something. Even if the blackbird article was renamed '']'', most of us, including myself, would still think of a blackbird as a blackbird. However, scientific names inspire both authors and readers alike to think about the big picture. Since this particular blackbird (there are many other blackbirds) is a member of the genus '']'', we learn that it is a thrush just like '']'', the American robin (a misleading common name). They both belong to the subfamily '']'' (true thrushes), and in turn the family '']'' (thrushes, robins, chats, and wheatears). Each of these groups has its own defining characteristics. In other words, a blackbird is not just a blackbird and the way we write and organize the articles in Misplaced Pages should reflect this. --] 12:26, 26 May 2006 (UTC)
Has anyone ever suggested combining the common and scientific names in the titles? Then we would have, for example: ] and ]? Other examples would be ], and so on. ] 19:08, 26 May 2006 (UTC) | |||
:Sorry, those are just way too long and cumbersome for my taste. —] (] • ]) 04:50, 29 May 2006 (UTC) | |||
==Ownership of images== | |||
If I give my sister my camera to take a picture, then I do the downloading from the camera to my PC, and then upload it to Misplaced Pages, am '''''I''''' the "creator" of the image, or is my sister, who pushed the button? Does my sister have to give permission for release, or is it my prerogative? ]|] 19:37, 21 May 2006 (UTC) | |||
:If you said "can you take this picture for me", and she said "yes", then she has taken the picture for '''you''', not for '''herself'''. Hence you have the copyright. Morally, you could, and should credit her. She gets the credit for taking the picture, but you have the copyright and can release the picture. Also, you have the picture, not her. Ownership being 99% of the law and all that. ] 20:03, 21 May 2006 (UTC) | |||
:Your sister is the one who decided exactly when to take the shot, from what angle, of what, etc. In other words, she was the one who was responsible for all the creativity in creating the image, and creativity is what copyright protects. Your role involved no creativity and therefore is not copyrightable. All copyright belongs to your sister, and she would have to give permission (or else, if she's under eighteen, your parents/legal guardians might have to, I think, depending on jurisdiction; but Misplaced Pages has tended to ignore that, since it makes life very complicated).<p>Copyrights made for hire are owned by the one who does the hiring (17 USC § 201(b)). If the work was not made for hire, the one who created it is the copyright holder, irrespective of motive (17 USC § 201(a)). —] (] • ]) 20:09, 21 May 2006 (UTC) | |||
::So if I give her a dollar to take the picture, I'm the copyright holder? :) ]|] 20:14, 21 May 2006 (UTC) | |||
:::I don't think so, unless she "expressly agree in a written instrument signed by them that the work shall be considered a work made for hire" (17 USC § 101). She needs to actually be employed by you, not just contracted. You could always buy the copyright off her later, though. —] (] • ]) 20:23, 21 May 2006 (UTC)
::::Just lie and say you took it. ] 20:26, 21 May 2006 (UTC) | |||
:::::Gaaa. I just think I'll take my pictures myself. :) ]|] 20:28, 21 May 2006 (UTC) | |||
::::Don't forget that a large part of the "creativity" in taking a picture is deciding '''what''' to take a picture of. From the sound of it, Zoe asked her sister to specifically take a picture of something. In that scenario, the idea to take a picture of something came from Zoe. I know I'd be annoyed if I thought of a great idea for a picture, asked someone else to take that picture, and then found that they had run off with the picture and idea, claiming it as "theirs". ] 20:30, 21 May 2006 (UTC) | |||
:::I think you can argue that by saying "can you take a picture for me", and her replying "yes", you have a verbal contract that you have hired her to take the picture, though from reading Simetrical's comment, that is contract work, not employment work. Don't think you have to actually give her any money. As for creativity, when '''I''' give someone my camera to take a picture, I tell them exactly how to take it. Hell, I even line the camera up for them (and then go and stand in the picture). Though I suspect that you gave your sister the camera to take a picture somewhere when you were not present, so ignore this part of my rant about creativity... :-) (the earlier comment about identifying who had the idea to take the photo is still valid). ] 20:30, 21 May 2006 (UTC) | |||
::::Actually, we haven't done anything yet, I was thinking about mailing my camera to her to ask her to take some pictures in the San Francisco Bay Area, but we haven't even discussed it yet. ]|] 20:36, 21 May 2006 (UTC) | |||
:::::In that case, she would clearly be the creator since you would simply be telling her what you wanted her to take a picture of and she would be adding the creative elements (lighting, angle, framing, etc...). And the fact that you "hired" her by paying her a dollar is not enough - the statute requires that the work made for hire agreement be in writing. However, all it takes is for you to be the copyright owner is a few words (in writing) from your sister acknowledging that all photographs taken by her for you will be deemed a "work made for hire" and that you will be the owner of the copyright. (This all assumes we are talking about U.S. copyrights.) -- ] <sup><font color="green">]</font></sup> 03:52, 22 May 2006 (UTC) | |||
:Zoe, defense of copyright is a civil matter. As long as you and your sister reach an understanding regarding what will happen to the picture(s), and you are fairly confident that she isn't going to sue you because of anything you do, it doesn't much matter whether you take steps to legally firm up your position. Sure you could work up a written contract, pay her for the images, and take other steps to cover your ass legally, but I would really hope such actions are unnecessary. Just tell her it is for Misplaced Pages, that pictures need be made public (under the GFDL or similar), and get her to agree to that. You can offer her a byline on the image description page if she wants, as well. I would hope that with your sister you wouldn't need to worry about things more than that. Legally though, she would ordinarily own the picture, so you could get into trouble if you try to trick her or release the image without her permission provided that she would be inclined to sue you over it. ] 04:21, 22 May 2006 (UTC)
:It's not really very complex. Basically, all you need to do is e-mail her and say, "hey, sis, these pictures you took, is it okay if we use them for Misplaced Pages?" and when she says "yeah, go ahead, no problem", you have received permission from the copyright holder. (A phone call will do just as well, but this way you have something you can refer to if, for whatever reason, the question of whether she actually gave permission comes up. Provided that you save the e-mail, of course.) If she takes the pictures, she owns the copyright (yeah, I suppose that technically it can be argued that if you tell her exactly what to shoot, it's really your picture and not hers, but for all practical intents and purposes, if you mail her a camera and she takes some pictures, she's the creator and copyright owner), but as long as you have permission to use the pictures, it doesn't really matter who the copyright owner is -- could be Santa Claus or Jacques Chirac or me, a permission's a permission. -- ] 05:10, 22 May 2006 (UTC) | |||
::Unfortunately, images that are by-permission for use on Misplaced Pages are not acceptable. See, e.g. ]. ] ] 09:01, 22 May 2006 (UTC) | |||
:::Ah. I stand corrected, my bad. That said, surely obtaining permission that ''includes'' third party use would be just as simple. -- ] 19:14, 22 May 2006 (UTC) | |||
::::Unfortunately it sometimes isn't...many copyright owners would be willing to release their work for educational/encyclopedic use on Misplaced Pages, but would not want to see their work sold by commercial entities. This is actually a fairly common situation. ] ] 23:36, 22 May 2006 (UTC) | |||
:::::I have no doubt that this is true, but in an instance where someone is taking pictures for the explicit purpose of use for Misplaced Pages, I kinda doubt that is a problem... -- ] 06:03, 23 May 2006 (UTC) | |||
:Slight correction: the "permission" needs to include an explicit acceptance of ]/] - see ] for some tips on how to acquire such explicit permission from a third party. Zoe, I take it that you're on excellent terms of understanding with your sister. But such an assumption is irrelevant for wikipedia. So, if you're on good terms with her, there wouldn't be a problem getting her permission according to the descriptions of ] I suppose, and that's the confirmation needed for uploading the picture in wikipedia. And then, in the image description (that is the "image:" namespace page where the picture is uploaded) you'd make correct attribution to the "author" of the picture, with reference to the wikipedia-compatible license conditions you agreed upon. Best to mention the attribution in the "edit summary" when uploading too, that makes the attribution "undeletable"/"stable" in later versions of the image description page. --] 09:44, 22 May 2006 (UTC)
:Keep in mind that images do not have to be released under the GFDL or a license compatible with the GFDL to be used on Wikimedia projects, and especially on English Misplaced Pages, where we still permit fair use of copyrighted images. However, just getting permission to use it on Misplaced Pages is not enough - ideally, get them to release it "to the world" under a suitable free license of their choosing. Usually I find once a person has explored a few options and found a free license they like, they have no problem using it after that. Also, if it's language-agnostic, consider uploading to Wikimedia Commons instead. ] 20:43, 24 May 2006 (UTC) | |||
== Userbox collectors == | |||
This is the second time I've run into a case of this; I can't remember the first user. But I would like to mention ].
It seems this person is adding himself to any type of WikiProject he can find, and collecting userboxes of projects and other UBX. However, he has not made ONE single edit to an article. His account was created only 2 weeks ago. Now, the fact that he collects userboxes I couldn't care less about, but the fact that he is adding himself to all those WikiProjects is annoying, in that it clutters up the Projects. It's not a real problem, but I was wondering if other people have seen similar problems lately. It almost seems like a bot (considering the bad formatting of the page). - ] 14:41, 22 May 2006 (UTC)
**Not to mention that the user also claims to have six bachelor's degrees, five master's, a doctorate and a law degree, and is studying medicine. Completely bogus, all of it. Misplaced Pages is being invaded by the mentally ill, the stupid, and the sociopathic. \ ] \ ] \ ] \ 20:22, 22 May 2006 (UTC)
:I've not seen this particular problem. From looking at this user's page and writings, it appears that he may actually think he has to sign up as a participant in a WikiProject to make use of the articles. He mentions his research and doesn't generally seem to be doing anything in bad faith. This may just be an opportunity for communication and understanding. <font color="3300FF">] (])</font> 22:53, 22 May 2006 (UTC)
::Maybe he's the ] type. <font color="green">♥</font> ]<font color="green">♥</font> 03:56, 24 May 2006 (UTC) | |||
== Signature limitations == | |||
Some editing has been occurring at ], as well as a straw poll that would strengthen the limitations on the length and content of signatures. This is currently being driven by a few editors; however, as this is an issue which affects a very large number of users, wider input would be desirable. - ]<span class="plainlinks"></span> 08:06, 23 May 2006 (UTC)
== Factions of Nationalism: proposal to move ] → ] == | |||
See ]. Alas, but:<blockquote>'''I get the creepy feeling this is a test case for how far the Polish Cabal can go in bending wikipedia their way'''. Note that Piotrus' argument regarding the dictionary boils down to: look how successful we've been thus far in replacing "Polish Biographical Dictionary" by "Polski Słownik Biograficzny" in many wikipedia articles (which is an unacceptable ] argument). It has been amply demonstrated by me that the English version "Polish Biographical Dictionary" is commonly used in *external sources* to refer to this multivolume dictionary, and not to the other, one-volume, one (see talk in ]).</blockquote>Sorry, don't want to offend people doing hard work in WikiProjects on specific topics (like the ]).
Anyway, it didn't surprise me a bit that Piotrus (the initiator of the vote above) opposes the new ] proposal (), as far as I can see ''entirely for the wrong reasons''. --] 08:51, 23 May 2006 (UTC)
:Um, I'm Swedish and not part of the Polish Cabal<sup>TM</sup>, but I think it is just common sense and sound bibliographical practice to keep books (as well as other publications) at their actual title rather than inventing new titles in English for them. | |||
:Despite the claims from Francis Schonken, I would disagree that it has been demonstrated that '"Polish Biographical Dictionary" is commonly used in external sources'. There are a couple of examples of loose references to this work in that way, but many more refer to it by its Polish title, and in a bibliography or footnote reference that would be the only correct way. This is really a very flimsy and biased argument. Francis just gives much greater weight to the few examples that correspond to his own view. | |||
:This is not one of those literary classics that has been translated numerous times, published in numerous editions and known by an English title for hundreds of years. It is a contemporary Polish reference work, published in Poland, written entirely in Polish, and with only a Polish title. Anyone looking for information on this is more likely to look for it under its Polish title. That is, at least, the only title under which it can be found in the catalogues of the ] or the ]. ] 09:41, 23 May 2006 (UTC) | |||
::] is clear on the issue, *especially* for those books that haven't been published in English (even if there would be no "standard translation" of the title, which was the case for some years regarding '']''), the wikipedia content page is at an English version of the title. I mean: '']'' is a redirect to something understood by an average English speaker, isn't it? --] 09:57, 23 May 2006 (UTC) | |||
:::As far as I can see, you have written that convention yourself. It has not been widely discussed, it is not really in accordance with actual practice when it comes to publications with foreign titles (the cases mentioned by you are your own, just as the "guideline"), and to most people this is probably a non-issue, as it is (as I wrote above) just sound and normal bibliographic practice to refer to a publication by its actual title, not an invented one. ] 10:07, 23 May 2006 (UTC) | |||
::Well, I don't know what your point is. Yes, I wrote large parts of ], apart from copying bits and pieces I had found elsewhere... that's what I do, I sometimes write guideline proposals. Then, the proposal was widely advertised, mentioned several times here on this Village pump page, at ], ], etc, etc. And then it was discussed in several places: primarily at ], there was a WP:RM vote *specifically* drawn up in these discussions, etc, etc.
::The section of that guideline regarding ''translation of book titles'' was not only discussed on the talk page of the guideline (proposal), but specifically *also* at ], in the "proposal" stage of the ''books'' NC.
::I don't know where you see a problem. If there's consensus for a different formulation of the ''books'' NC, I'd happily oblige. But mind you, I'd rather put a halt to what I, and some other wikipedians, consider to be ''factionalism'', first. --] 10:45, 23 May 2006 (UTC)
While I appreciate that this thread will bring more needed comments to the move page, and perhaps to the naming guideline itself, I don't appreciate Francis reposting his statements, bordering on ]s, about me being a Polish nationalist, a member of some Polish Cabal<sup>TM</sup>, and editing Misplaced Pages articles to swing the vote, especially considering that I could have just moved the page like Francis did in the past instead of listing it on RM to let the community voice its opinion (not for the first time, as I have listed that issue on RfC some time ago, too). I asked Francis to explain his accusations on the article's talk page and await his reply there. That aside, I think ] is a useful tool, and I believe it was advertised widely enough (many of those guidelines don't attract attention no matter what the creator tries...). Nonetheless the policy is not clear what to do when the only sensible English title is the same as the title of an already existing English publication. Given the choice between creating a disambig and moving the article to a completely fictional title (like ], as one of the discussants has suggested) and using the Polish name which is used by the majority of academic publications (Google Print test), I think the solution is simple.--] <sup><font color="green">]</font></sup> 20:44, 23 May 2006 (UTC)
:''']''' - I formally deny
:*This remark was not offensive: "Please comment at ] '''to stop this monstrosity from happening'''."
:*It was never posted on the Polish Noticeboard's talk page
:*It was never posted there by Piotrus
:*And it was not about a WP:RM vote initiated by Francis
:], There Is No Cabal.
:Re. "''Nonetheless the policy is not clear what to do when the only sensible English title is the same as the title of an already existing English publication.''" - pardon? I don't see why this should be treated differently than ]/] if further disambiguation is needed (examples given in ]). And FYI ] is not a ''policy'', but a ''guideline''. But the guideline is not "unclear" about how to disambiguate two books that ''in English'' would have the same title. --] 09:58, 24 May 2006 (UTC)
::I am not sure what your point is in bringing up this old RM, other than to illustrate how your proposal ], despite it being listed on ], which the last time I checked the Polish Cabal<sup>TM</sup> didn't manage to take over just yet :) As for your policy arguments, Tupsharru has ] that 'Polski Słownik Biograficzny' is indeed the most common English name for this publication, as it is used by almost 90% of academic, English-language publications. Therefore according to ] we should use the name 90% of people who find it in those sources would type in the search engine (i.e. the original Polish title).--] <sup><font color="green">]</font></sup> 17:31, 24 May 2006 (UTC)
Something else: do you know where the difference lies between ''factionalism'' and a non-factionalist approach?
We have (for example):
* ]
And we have (for example):
* ]
The difference is that in the first case listing on the page is open to anyone: e.g. a ''heterosexual'' without any particular commitment to ''write'' LGBT articles, but who commits him/herself to keeping an eye on the development of LGBT topics on en:wikipedia, can list him/herself. The only condition/criterion is that you're "interested" and "active" (without any indicated qualification in what domain you're supposed to be "active" as a wikipedian), see ]
In the second case the notice board is (by its very name) limited to wikipedians that are ''Polish''. There are no names listed of wikipedians interested in ''Polish topics'' (no names are listed on the notice board page, see below), but in all clarity, the thing is managed by people with a pro-Polish POV. And if you're not Polish, you at least need to write ''Poland-related articles'' (see intro of the page), or you're supposed not even to have any business looking at the page.
So I recommend to ''rename'' ] to ] (which could be done by WP:RM if we don't establish consensus here). I'd prefer not to use ] since that might create misunderstandings with "]", the language. And rewrite the intro, making clear the page is for anyone ''interested in Polish topics''.
Further, I'd recommend allowing wikipedians to list their names on the page (like for any usual WikiProject-like page). And leave it up to those users whether they qualify their listing on the page with something like "not Polish, but interested", or just put their name, whatever their provenance or ability to speak a foreign language.
I think that the ''List of Polish Wikipedians'' (which on the notice board is a link to ] - meaning that neither ''learning to speak Polish outside Poland'', nor ''living abroad and being of Polish descent'', nor ''being a professor of Polish history at a foreign university'', etc., is sufficient to declare membership) should no longer be used as a mechanism to list interest. After all, one might live in Poland, and be more interested in the LGBT notice board than in a notice board on Polish topics... Listing oneself as a "Wikipedian from Poland" or as a "]" is entirely different from listing oneself on a notice board on a topic one is interested in. It's better to keep them separated IMHO, since I think not separating "interests" from "de facto membership criteria" fosters ''factionalism''.
So, those are some small steps I recommend towards a less factionalist approach. --] 12:00, 24 May 2006 (UTC)
:Actually I would not be averse to changing the name to ], as this would more clearly represent the spirit of the noticeboard, but I am not convinced it is really worth the trouble. The original name was chosen without much thought, following the unofficial naming scheme of ]. As the majority of the noticeboards have names constructed as Misplaced Pages:Country's adjective notice board (and therefore we have the ], ], and such), I'd think you may want to propose a general renaming; however, I would like to note that the same naming problem seems to apply to your favourite example, i.e. the ], whose name can also be overinterpreted and taken to imply a closed membership.
:Of course that noticeboard clearly states that it is an open board for all Wikipedians interested in those issues, just as the Polish Wikipedian's notice board states at the very beginning of the page that: "This is a page to function as a notice board for things that are particularly relevant to Polish Wikipedians and those who are writing Poland-related articles". I would think this is clear enough to avoid any confusion, but of course if you think this is not the case, ] and improve our intro.
:Third, please note that ].
:Fourth, note that the ] is an obsolete relic of the old days, and it is being replaced with categories and userboxes by the ] project. I have no problem if somebody wants to create categories for Wikipedians 1) living in a given place 2) of certain origin 3) interested in something, so if there are people who want to create relevant lists/categories/userboxes, go right ahead. You may certainly propose that, or changes to it, or whatever you feel is appropriate at our noticeboard, here, or wherever you feel it should be done, but I think this is somewhat OT here.
:Finally, it is my personal belief that the entire factionalism issue is a moot waste of time. Factionalism has existed, exists, and will exist, no matter how hard one tries to deny it. Forbidding people to express their allegiance to factions, which often cannot really be separated from interests, will only foster more trouble than it is worth, as the recent ] have shown. ] is all we need to deal with it, and all good-faith attempts to improve it result in ] and waste of time, as I personally would have much preferred to write another article for Wiki instead of defending myself and the tool we use (noticeboard) from your suspicions here.
:Over and out. This was a sponsored message from the Polish Cabal<sup>TM</sup>--] <sup><font color="green">]</font></sup> 18:05, 24 May 2006 (UTC)
:: Regarding the renaming discussion at ]. In my experience, native Polish speakers are organized via their noticeboard here on Misplaced Pages, and they do routine "calls to action" to get their members voting on these kinds of issues around the English-language Misplaced Pages, trying to make it appear that there is a general all-community consensus to rename articles to their Polish-language titles, when in reality the consensus is coming from people who are clearly biased: Polish-speakers. As regards the particular renaming, my feeling is this: Misplaced Pages guidelines are to Use English (]). "Polski Slownik Biograficzny" is not English. "Polish Biographical Dictionary" ''is'' English. Even at the English-language page of the Academy that ''writes the book'', they refer to the translated name as "Polish Biographical Dictionary". These all make a clear case to me, that the correct vote is to Oppose a move of the page to a Polish name.
::I wouldn't go so far as to call this Polish voting block a "cabal", but I do think it is inappropriate, especially when there is a Polish-speaking admin in their number who is both forcing the issue by starting the polls in the first place, calling his companions to vote on the matter, ''and then declaring himself as the proper authority to make the decision on the consensus.'' To me, it seems a clear conflict of interest, and inappropriate behavior for an admin. --] 18:40, 24 May 2006 (UTC)
:::Regarding your official name argument, it has been rebutted at the linked page by ], who has no connections to the Polish NB; I'd recommend you reply there to his argument instead of repeating it here.
:::As for the 'Polish voting block' or whatever new name you give it this time, I think is the best reply. You may want to voice your opinion at ]; I resent accusations that I have ever done anything unethical with regards to the voting. Issues such as names of Poland-related articles are obviously of interest to editors interested in Poland-related matters, for whom the noticeboard is a natural place to find, well, notices posted about matters they are interested in.
:::I find it somewhat amusing that I am being accused by you of doing wrong both when I am 'starting the polls' (above) and when I 'makes changes to Poland-related articles without consensus' (]). It does seem that I am in the wrong whether I ] or whether I ask the community for its input.
:::I would still like to see examples of where I am 'declaring himself as the proper authority to make the decision on the consensus' in a vote evidently rigged by me and 'my companions'. I was not aware I suffered from ], but please show me where I have erred so. I will admit I am getting ] by this wave of unbacked accusations about me and other editors from the PWNB acting in ], especially when people prefer to invent some cabal conspiracy theory to explain why they have lost a vote instead of using an ] and considering for a brief second that maybe, just maybe, the vote was fair, and it was their proposal that simply proved unattractive to the community.--] <sup><font color="green">]</font></sup> 21:12, 24 May 2006 (UTC)
;Tupsharru's alleged search results are unreliable : This is what I found:
:* '''Polski Słownik Biograficzny''': '''zero''' results at Google Books (note that the proposal is to move to this name and not to "polski slownik biograficzny")
:* '''polski slownik biograficzny''' yields '''187''' results at Google Books (), an overwhelming part of which are links to texts in ] (e.g. ). Since Google books has no functionality to separate "English" search results from search results in the Polish language, such search results should be rejected (unless sorted ''manually'', which didn't happen).
:Note however that Tupsharru's search '''supports''' that at Google Books '''in 96% of the cases''' "Polish Biographical Dictionary" refers to the multi-volume dictionary, and not to Sokol's one-volume dictionary.
;Tupsharru's alleged search results are irrelevant : For books not published in English, and for which the title would usually not be understood by English speakers ] is clear that the title should be translated. In other words, if for such books the ] and the ] would lead to different results (which isn't even proven in this case), ] gives precedence to the ''translated title''.
;Piotrus apparently sanctioned something he hadn't checked : "I think that we can safely conclude now that 'Polski Słownik Biograficzny' is the most common English name for the book" ... is a ''nonsensical'' allegation, since at this point it is not supported by solid evidence. --] 07:32, 25 May 2006 (UTC)
:It's nice that you finally address the content issue with some data instead of making unfounded accusations, although I am not sure what you mean by me 'sanctioning' something. T.'s results looked good, and I tend to assume good faith and don't double-check every statement somebody posts. I'll now wait and see what T. has to say in defense of his arguments.--] <sup><font color="green">]</font></sup> 00:39, 26 May 2006 (UTC)
::Anyone is welcome to check the Google Books results for themselves. As for the number of hits, I obviously wouldn't claim that I got 203 hits if I didn't actually get 203 hits. When I looked at those 203 hits yesterday, it actually came down to 94 different publications (books or journal issues), of which thirty-something were in English. Thirty-two, I think, but ] counts thirty-seven, see discussion on talkpage. Today I get 228 hits in 103 publications. Why the results vary is beyond me.
::Focus should obviously be on the English-language publications, and the fact remains that ''of those English-language publications which do use a translation of the title, an overwhelming majority use it as a translation in brackets or parentheses, together with the Polish name.'' (Yesterday) I found '''thirty-two''' English-language books referring to the PSB under its original title. '''Nine''' publications use an English translation of the title, but of those nine, seven use the Polish title, with the English translation added just to explain what kind of a publication this is. One uses the Polish title once together with the translation, then refers to the dictionary in English a number of times. Only one book uses the English translation on its own without mentioning the Polish title.
::As for these results being "irrelevant" because of the existing guideline ], Francis Schonken has it backwards. Guidelines not supported by consensus are irrelevant. As I have already pointed out, this guideline was mostly written by Francis Schonken as late as January this year. He wrote it at about the same time we had this discussion last time over exactly the same article (during which Elonka moved it from the original Polish title twice, despite resistance from others and with no attempt to first gain consensus for that move). I think it should have been clear to Francis Schonken when he composed the guideline that there was no consensus on this issue. This very discussion shows that there is no consensus now either. Francis Schonken can dismiss other people's conclusions and views as "irrelevant" as much as he wants, but if the guideline isn't supported by consensus (nor, in this case, by common sense or the normal bibliographic practice of referring to publications by their real titles, not loose translations), the guideline needs to be changed. ] 08:21, 26 May 2006 (UTC)
== Notability of schools ==
Can anyone point me to any guidelines concerning the notability requirements for schools? Specifically, I'm wondering if we are to allow articles for every elementary school & high school in the US? That seems to be the precedent, but I would like to know if that is laid out anywhere. --] (]) 14:09, 23 May 2006 (UTC)
:"Block voting" is the term you're looking for, as opposed to a consensus-created guideline. The best thing is to quietly merge them into something other than a sub-sub-stub, but expect violent opposition if you're discovered. - ]<span class="plainlinks"> </span> 14:13, 23 May 2006 (UTC)
::Right, if it's on a public school in the U.S. at least, and there isn't enough substantive information to justify an independent article, merge it into an article on its parent school district. People can make as much noise as they want about whether schools are notable, but that's irrelevant as to whether merging is justified by the lack of content. ] 14:47, 23 May 2006 (UTC)
:::I could've sworn we'd at least gotten it to nothing below high schools. ] ] 15:01, 23 May 2006 (UTC)
::::That's what I would think (or hope), but I was looking for backup. I really don't see why most elementary schools need articles. --] (]) 15:29, 23 May 2006 (UTC)
:::::Obviously, we do keep almost all verifiable real elementary schools (take a look at AFD precedent for the last year). They have been found notable. As people said, for articles lacking content, where appropriate you may do merges. Of course, 99% of the people who claim to support merges never bother to do any, or if they do them, they do it in a haphazard manner. For instance, somebody merges to a talk page, or a state-level list, and then cries when it's undone. You'll find that there are a fair number of people interested in editing individual school articles, but pretty much nobody interested in editing school district articles (the usual merge targets). --] 17:08, 23 May 2006 (UTC)
::::::Only as a result of block voting are we keeping many schools. The proposed guideline was not able to gather consensus supporting it. As to merging, yes, that is very acceptable and I have done some. The problem is that many editors no longer care about the school articles. As a result, basically no one is taking the time to do merges when that is the right action. ] 20:32, 23 May 2006 (UTC)
:::::::Can you give some examples of merges you've done (here or ])? --] 04:26, 24 May 2006 (UTC)
::::::::I know you weren't asking me, but some examples of merges I've done were to create ] as a home for the one sentence substub, ; and ] as a home for ]. This latter merging was undone, despite the high school's article containing no information not true of the district as a whole aside from the name of its principal (which is of no encyclopedic import). I note that you yourself undid my merge of ] to ], despite the fact that the district article after the content was merged was itself a stub and easily incorporated what little independent content the high school article had. I'd be inclined to work on more school district articles if I thought it was worth the bother, which I unfortunately don't at this point. ] 04:49, 24 May 2006 (UTC)
:::::::::Let's see. You made the absurd decision to merge a school with multiple alumni into a district. You wanted to toss alumni from different schools together, not indicating if they went to a particular school. You indicated it was your plan to do this with other schools, essentially losing the information of who was an alumnus of which school. I reverted Cardinal, but later , which you opted to ignore. Since you wish to go back to November 2005, note I helped with a merge in December, listed . So, please don't paint me as a blocker of all merges. I hoped people could list redirects in merges in a central spot, and work together, but that never panned out. --] 07:33, 25 May 2006 (UTC)
:::::::That's a highly misleading statement. The school deletionists were soundly defeated. You have no grounds to regard "keep" votes as somehow less-meaningful "block voting" at the same time as somehow seeing each "delete" vote as superior. The last version of the schools proposal was effectively a "keep all" because it set conditions which any nominated article was certain to be raised to during the debate whatever condition it was in when nominated. Thus it was completely pointless as it was useless to deletionists. But even that version was rejected because the clear majority favour keeping school articles. Pretending otherwise to yourself is self-delusion; implying otherwise to others is a serious misrepresentation. ] 15:16, 24 May 2006 (UTC)
: Last I checked, high schools were virtually undeletable, elementary rarely deletable, and even infant schools, especially in say the UK where there's lots of verifiable information on them, are often kept at AfD. The talk of "block voting" is, I suspect, a matter of sour grapes from the few who adamantly opposed school articles for a long time. --] 04:51, 24 May 2006 (UTC)
::You could make that claim, but the existence of school watch suggests otherwise. ] 15:55, 24 May 2006 (UTC)
On the high school issue, a reasonably good article usually brings overwhelming keep results on AFD. There is no consensus on the elementary school issue, and as long as the article is reasonably good, it is unlikely that it will be deleted. Actually, I don't know if an article has to be reasonably good even, take a look at for an example of what I mean. ] ] 14:21, 24 May 2006 (UTC)
: You were right btw, the things people will keep.... ] ] 15:30, 24 May 2006 (UTC)
== Accessibility for blind users ==
As a result of an RfC, I stumbled into an edit war at ], where I found some "click on a name to read an article about..."-type comments, which I removed, partly following ], but mostly because I felt at the time that it was a downright unnecessary and silly thing to say on the internet, where a person incapable of realising that you click on links is unlikely to have gotten there in the first place.
Anyway, my edit was reverted by ], whose comments on the article's talk page, and my own talk page, indicate that he is blind, and that he finds headers of that kind useful on his browser. Several users have pointed out that there is no great merit in editing a single article for greater accessibility by the blind, and the suggestion was made to bring the matter to the Village Pump, which I am now doing. Here are some preliminary questions I have, although I invite comments on the subject generally:
#Is there already an accessibility policy on Misplaced Pages/Wikimedia? If so, where is it?
#Is there a group, a category or a project for blind Wikipedians, and if so, where's their page?
#Are self-references genuinely useful to blind users, and if so, in what way? If yes, can/should they be used more widely in Misplaced Pages?
#Is there anything further individual users, or the project, can/should be doing to assist accessibility? ] 20:05, 23 May 2006 (UTC)
'''PS''' There is a substantial (if rather rambling) discussion to be found here: ]. ] 20:18, 23 May 2006 (UTC)
:I only know of ] and ]. ] ] 20:39, 23 May 2006 (UTC)
::On a related note, while following the links above (tnx!) I stumbled upon a seemingly forgotten but very important proposal: ]. I don't think I am the only one who finds the current implementation of tables in Wiki to be less than perfect. If you agree with this, please endorse the proposal and .--] <sup><font color="green">]</font></sup> 17:11, 24 May 2006 (UTC)
:::As a blind wikipedia user, I would not find those link explanations useful; I would find them distracting. All modern screen readers clearly alert a user when they have found a link on a webpage, and all blind internet users should know how to activate links. With ]s (which I use), this is very clear. With a ], which is what the user seems to be using, this may not be so clear; from his comments I believe he needs to become more familiar with the technology he is using, or upgrade it. I've given most of my thoughts about accessibility on wikipedia in ] and ]. Basically text and link descriptions should be concise and well-written, and there should be a clear and consistent article structure to give the site a consistent look and feel. There isn't a category or a WikiProject for blind wikipedians, simply because there aren't that many of us. I know of only two other contributors who say they are blind or use a screen reader on their user page. However, I have a couple of blind friends who have edited here anonymously. ] ] 12:50, 27 May 2006 (UTC)
One method used by screen reader software to identify links is to change the gender of the voice. For example, plain text might be read in a male voice and links in a female voice. ] 13:30, 27 May 2006 (UTC)
== A Cautionary Tale ==
In ], somebody opined that there was a . The text in question was from the ] page. Turns out it was originally . That is, the cited page is actually a copy '''of Misplaced Pages''' from several months later than the original section!
Remember, folks cite Misplaced Pages without attribution. The fact that Google finds them does not make a good test for copyvio.
:--] 20:18, 23 May 2006 (UTC)
:A useful thing to remember, actually. Somebody deleted the history section of the ] article, saying it was a copyvio of the . In actual fact, I wrote much of the section for the Misplaced Pages article and I therefore know that the RA nicked it from us. Don't always assume that even official sites are the originals. It shows where people turn these days if they want information though! -- ] 23:09, 23 May 2006 (UTC)
::It can sometimes be difficult, however (especially when the dates of writing are not cited properly in the external source) to prove which way the information bleed goes. We don't, after all, want to be sued by a breakfast cereal company for copyright infringement (excuse the oblique reference please!). ] 23:15, 23 May 2006 (UTC)
:::Naturally it can. But there is a (wholly natural and understandable) tendency to automatically assume that official sites are the originals, which is not necessarily the truth. It's just useful to keep an open mind. -- ] 23:38, 23 May 2006 (UTC)
:::Agreed. I'm surprised that Royal Artillery would copy the article, though. Some claim we're unreliable, but we're good enough that people (Royal Artillery, journalists, companies) take our content and try to pass it off as their own. -- ] 05:54, 24 May 2006 (UTC)
::: Oblique reference excused, 42! ] 06:20, 26 May 2006 (UTC)
:My rule of thumb is to look at the original version that was added to Misplaced Pages. If it's unwikified, or if it's fairly old and an exact match for the other text, the other text was probably the original. --] 06:16, 24 May 2006 (UTC)
::Yes, unwikified and/or written in a very pompous, sycophantic, self-promotional, flowery or old-fashioned style - usually very good indications of a copyvio. -- ] 12:39, 24 May 2006 (UTC)
:Another good way to identify a copyright violation is if the user who wrote it is not an "established" user and has very few other contributions. Also, reused Misplaced Pages content is often easy to identify by the lead-in, which will in a typical Misplaced Pages style mention the article title prominently and establish a context. Also keep in mind that the short-term consequences of copyvio are ''not'' as severe as you think, because ] forces copyright holders to ask websites politely to remove content before they sue. ] 20:33, 24 May 2006 (UTC)
:Another test – that is suggestive, but not definitive – is to check the ] at http://www.archive.org. Drop the URL of the putative source of the copyvio into the Wayback Machine link, and click 'Take Me Back'. You will be able to see versions of the page from various time points in the past; there will be an entry for every time the Archive spidered the page. If a page's content appears in the Archive either before or long after it appeared on Misplaced Pages, one may be able to draw an inference about who copied whom. Please note that this is not infallible—the Archive doesn't spider all pages, it may take a significant amount of time for the Archive to find a page in the first place, it doesn't help you if the external page has been moved or renamed, and it doesn't tell you if both pages were cribbed from a third source (web or paper). ](]) 14:54, 27 May 2006 (UTC)
== Mass nominations on AfD / separate discussion page ==
Recently much fun was had when during stub cleaning someone nominated a bunch of Jewish summer camps together with a bunch of Hasidic rebbes on AfD.
When someone has doubts on the merits of a particular article he can always go to the talk page before the entry ends up on AfD. Besides, AfD isn't a substitute for the {{tl|cleanup}} tag.
But where does one go when a whole class of articles look doubtful? It could be useful to set up a page where groups of articles can be discussed, to find out if they should be merged, improved or maybe thrown out altogether.
Such a page might also reduce the hostility on AfD when a group of articles ends up there; if large amounts of "cruft" are there, there are the inevitable calls of "keep all bits of cruft", even if it's really crummy cruft. ] 22:13, 23 May 2006 (UTC)
== Another "fair use" question == | |||
], some kind of character from a cartoon, has what I consider a pretty excessive amount of fair use and/or completely unsourced images. '''Thirty-eight''' screenshots, to be exact. Does that seem a bit much for the purposes of illustrating a cartoon character? This situation was (accidentally) brought to my attention by ], who wanted my help putting yet ''another'' image in the article (his was a little movie). This editor has had a lot of problems in the past, leading to several blocks and an RfC from me, so I'm hoping someone else here can weigh in on that article and make the changes that need to be made (if, in fact, I am correct here). It might be better if it doesn't come from me. If anyone here disagrees, and thinks 38 pics is a reasonable amount, please let me know. Thanks! ] 04:11, 24 May 2006 (UTC) | |||
:I think the article overall fails to qualify as fair use, both in its use of fiction and the images. It mostly just gives an abridged version of the character's story without really commenting on it or transforming it by adding a factual, real world context. That's the real danger of fancruft—not simply that it is worthless to an encyclopedia, but that it is a copyright infringement by being merely derivative of fiction rather than informational and analytical. ] 04:18, 24 May 2006 (UTC) | |||
::Too many screenshots, I agree. However, on the broader point of its general existence, I'm not sure. Perhaps you have a better understanding of copyright law on this issue than I do, but I've never been sure where to draw the line on transformative uses. The character is apparently based primarily on some 59 episodes of an animated cartoon. At face value I would say that taking a cartoon and producing an encyclopedia style article describing one of its key characters is at least somewhat transformative. I doubt, for example, that the cartoon laid out things in the same way or bothered with such systematic descriptions. I also don't see it doing much harm to the anime market, as the article is probably not much of a replacement for the experience of seeing the show (and 59 episodes is a lot of show). If it were originally written material, I'd probably feel differently, but in my mind there are enough qualitative differences in the change of format and media that I could imagine at least being able to argue the point. ] 05:21, 24 May 2006 (UTC) | |||
:::I'm with Dragons flight. Too many screenshots (although you could make a case for keeping all those directly depicting some kind of "special ability", since those do add to the informative value), but the article itself is quite transformative and is certainly nothing resembling a substitute for dozens of episodes of a cartoon. —] (] • ]) 05:38, 24 May 2006 (UTC) | |||
:::You really said something when you used the phrase "encyclopedia ''style''," because that's all it is for the most part. To retell a story is to make use of its copyrighted expression, even if you use none of the original specific language. So there must be a fair use rationale for that retelling. However, nothing transformative occurs when someone just summarizes fiction by giving a play-by-play history of what happens, even if that summary is in a different medium than the original (e.g., from cartoon to writing). At best, the result is just a condensed or abridged ''derivative'' of the original work. Instead, the fiction needs to be placed in a proper factual context and objectively described (i.e., from a real world perspective). The more an article looks like it came from a fan reference guide (of the kind that the original authors of the fiction would themselves market), the less of a claim to fair use there is for retelling the fiction. The more an article looks like the work of a cultural historian, the better our fair use justification for making use of the fiction (and associated images). ] 05:09, 25 May 2006 (UTC) | |||
::::Perhaps it would be appropriate to write an additional section in ], explaining the legal boundaries of what's acceptable and what's not. I think most people are pretty well aware of the dangers of "straight copying", but not aware of what you're speaking of. --] 05:26, 25 May 2006 (UTC) | |||
:Far too many. I'd delete all but one or two. --] 06:15, 24 May 2006 (UTC) | |||
I've brought this article up before to no avail, and oddly enough was just talking with ] about it this morning. Angr feels (and I think I agree, in so far as I understand the issue) that there's a serious fair use issue with the images. One thing that makes it hard to fix is that one editor in particular (not Zarbon) feels an incredible degree of ownership over that article, as you can see from the talk page, and any change or "meddling" is likely to be followed by a great deal of argument. I haven't taken it on because I thought the task of building a consensus to prune on the article's talk page was just too daunting. But I think just pitching in and pruning isn't a good idea either. Perhaps a note on the talk page that it's being discussed here? <b><span style="color: #f33">·]·</span></b> 14:12, 24 May 2006 (UTC) | |||
== Redirects and categories == | |||
Currently, if someone wants to find something on Misplaced Pages, they have four main options as I see it: | |||
*Perform an ''external'' '''search''' (e.g. Google) | |||
*Perform an ''internal'' '''search''' or try typing the article name directly | |||
*'''Browse''' for a ''portal'' and browse that portal | |||
*'''Browse''' the ''category'' system | |||
The first two methods work well when you have a well-defined subject with a well-known name or well-known search terms associated with it. Those methods are not so good when you are not sure exactly what you are after, but you know the subject area - which is where browsing comes in. Which brings me to redirects. Currently, while browsing in main article namespace, if you see something that looks like what you are after, you can click on it, ''even if it is a redirect''. Thus technical terms can be phrased differently, in more familiar terms, making it more likely that people will recognise something and click on the link. Sometimes an editor will want to use a different phrase, but still point at a certain article, ''even if the article name is clumsy'' (for example, if it has disambiguation parentheses). This can be done by piping (using the "|" trick to hide the article name behind what you want the reader to see). | |||
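For readers unfamiliar with the syntax, the piping described above looks like this in wiki source (the titles below are made-up examples, not articles from this discussion):

```wikitext
[[Tropical cyclone|hurricane]]   <!-- reader sees "hurricane"; the link goes to the full title -->
[[Boston (band)|Boston]]         <!-- hides the disambiguation parentheses -->
#REDIRECT [[Tropical cyclone]]   <!-- a redirect page; categorising the redirect is the only current workaround in category space -->
```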
This makes browsing main article namespace and following links very intuitive and easy. A big problem, and I'm almost certain this has been raised before, is that ''you cannot do this in categories''. There is no control over how an article name displays in category space. At the moment, the only way to get a genuinely alternative name to appear in category space is to categorise the relevant redirects, and, unfortunately, I don't see many people doing that, though I think it should be encouraged. | |||
Finally, is there likely to be "piping" in category space any time soon? ] 17:00, 24 May 2006 (UTC) | |||
:Categories are but one way to organize information in Misplaced Pages. Lists and navigational boxes are others. They each have their own advantages and disadvantages. Please see ]. I don't think there's a policy on this yet (it recently came up at ]), but IMO categorizing redirects that are spelling or phrasing variants seems like a bad idea. There have been various proposals for extending the category syntax to include presentation information in the category reference. I suspect this is not very high on the developers' list of enhancements. BTW - another powerful way to find things is with "what links here". If you know something reasonably obscure that an article you want to find should link to, go to the obscure thing and see what links to it. -- ] <small>(])</small> 04:21, 25 May 2006 (UTC) | |||
== Disadvantages of piping == | |||
Is there any way to prevent vandals from hiding an unsuitable article name behind a piped link? Like a link to ]? Is the "hover" tag and the link name being shown at bottom left (in some browsers) the only way to check this sort of thing? Is there a way to turn off piping if you want to check an article for this sort of thing? ] 17:05, 24 May 2006 (UTC) | |||
:The simplest way is to read the wiki source instead of the rendered page. This also makes minor editing more convenient, but isn't so great for reading tables and images. Another way would be to download a recent database dump and load it into a suitably modified copy of Mediawiki. In short, no, not really. ] 20:22, 24 May 2006 (UTC) | |||
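:Short of reading the source by hand, a crude automated check is conceivable: scan the wikitext for piped links whose visible text shares no words with the hidden target. This is a hypothetical sketch, not an existing MediaWiki feature, and the no-shared-words heuristic will flag plenty of legitimate pipes:

```python
import re

# Matches [[Target|Display]] piped links in wiki source.
PIPED_LINK = re.compile(r"\[\[([^\[\]|]+)\|([^\[\]|]+)\]\]")

def suspicious_pipes(wikitext):
    """Return (target, display) pairs where the visible text shares no
    words with the link target -- a rough flag for hidden targets."""
    hits = []
    for target, display in PIPED_LINK.findall(wikitext):
        if not set(target.lower().split()) & set(display.lower().split()):
            hits.append((target, display))
    return hits
```

:On `See [[Goatse.cx|this helpful page]].` the pipe is flagged, while `[[Fjord (geography)|fjord]]` passes, since the display word appears in the target.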
== Continued from ] and ] == | |||
<!-- 00:10, 25 May 2006 (UTC) --> | |||
"Yet it's not _identical_ (think for example of -ise and -ize. -ize is as rare as hen's teeth in Australia, but is endorsed by many sources in England; British Oxford dictionaries prioritize -ize, but the Australian Oxford Dictionary has -ise as the main form; program is more used in Australia than in Britain; etc.;" I was aware all along that British spelling (I read this on ]) allows both the -ise ''and'' -ize endings, but I omitted to mention it for the simple reason that most ]s (I was one of them before I read that article) think of -ize as "American" spelling. I appreciate JackLumber mentioning the ] after all the time Jack had tried to tell me about "British" and "Australian" "spelling differences" but had given me no evidence until then. I have been wondering about the origins of the -ise spelling; could it somehow have come from ]? Could -ize also have become as "rare as hen's teeth" in Australia as a result of Germanisation (although this leaves unexplained why the same didn't happen in the ])? I have noticed a mistake in the table: the word 'fiord' is not used in ]; they call them by the Maori term, which I can only half-remember. The other mistake is that spellings of the word 'manoeuvre/maneuver/manoeuver' are given in the UK/US/CA columns but not in others (we do use that word here in ]; I wonder what the ] equivalent term is). According to that column there 'is' such a thing as "Commonwealth spelling", so Jack was wrong to call it an incorrect term; also according to that column, "British" spelling is also utilised for writing ], so "British spelling" does turn out to be an ] term after all, but it is an ] term. ]:-) | |||
The reason I added this here is because I tried doing this on ]'s ] but Doco removed it for no apparent reason. ]:-( | |||
* As far as I know -ise is (for the most part) the result of French influence, not German. ] 02:52, 29 May 2006 (UTC) | |||
* ''-ise'' derives from ] influences following ]'s 1066 invasion of Britain (see ]). It was a modification of Latin ''-izāre'', according to . —] (] • ]) 04:59, 29 May 2006 (UTC) | |||
::Anyways, the British (and later Australian) trend toward ''ise'' is way more recent. From the ]: ...'' in modern French the suffix has become -iser, alike in words from Greek, as baptiser, évangéliser, organiser, and those formed after them from Latin, as civiliser, cicatriser, humaniser. Hence, some have used the spelling -ise in English, as in French, for all these words, and some prefer -ise in words formed in French or English from Latin elements, retaining -ize for those of Greek composition. But the suffix itself, whatever the element to which it is added, is in its origin the Gr. -izein, L. -izare; and, as the pronunciation is also with z, there is no reason why in English the special French spelling should be followed, in opposition to that which is at once etymological and phonetic. In this Dictionary the termination is uniformly written -ize.'' ], 14:39, 29 May 2006 (UTC) | |||
::Myrtone, the MoS is wrong. There's no Commonwealth authority establishing a "Commonwealth" spelling; strictly speaking, each Commonwealth nation has its own spelling system. In fact, the "Commonwealth" column is merely a restatement of the British column, and is additionally unsourced. The Philippines and Liberia don't follow British spelling, nor are they Commonwealth countries--yet English is an official language there. ], 14:45, 29 May 2006 (UTC) | |||
"the MoS is wrong. There's no Commonwealth authority establishing a "Commonwealth" spelling; strictly speaking, each Commonwealth nation has its own spelling system." Then why does it have it in the first place? And why hasn't anyone removed it? The reason I put it on Doco's talkpage is that I wanted a comment from a non-native English speaker; I have now put it ]. In this whole discussion, I have not had one comment from a non-US wikipedian, let alone a non-UK/US user. ]:-( | |||
== Admin function == | |||
I find there is a problem. The best and brightest Wikipedians are the admins. However, before becoming admins they were ordinary users. My point is that the main function of Misplaced Pages is to write articles, or am I wrong? The best authors are promoted to be admins. At that stage, they change function. They then become like, well, Military Police officers, with the senior ones as judges, tracking and punishing small-time vandals. After the best authors have changed function, the vandals will all be gone, but the articles will not be as good. Is this really the way forward? ] 18:40, 24 May 2006 (UTC) | |||
:That's an interesting point, but I don't think you're right. For example, I've been an admin for a few years, but I still spend most of my time contributing to articles rather than fighting vandals, resolving conflicts, or patrolling recent changes. To some extent, there is an expectation that admins will perform administrative functions just because they have the ability to do so, and somebody has to do them, but I don't feel obligated personally. ] 20:27, 24 May 2006 (UTC) | |||
::Yeah, that's interesting, but many admins are promoted because they do exceptional vandal fighting and participate in AfD. --] 22:52, 24 May 2006 (UTC) | |||
:::And some of us simply prefer administrative duties to article creation. Me, for example; I prefer the administrative side to the creative side. --]] 23:14, 24 May 2006 (UTC) | |||
:Due to the way people make decisions at RFA, most admins come with a background in vandal fighting. The admins that are most active in admin-only areas tend to come from the vandal-fighter group. ] 23:27, 24 May 2006 (UTC) | |||
::If admins come from a background of vandal fighting, how can they be neutral when coming up with a ruling to revert an article's content, NPOV, etc.? How can they identify vandals? Do they have a register? Can people ever come off this register? Does every admin think a registered vandal always has a POV in future posts? Is a vandal-fighting admin answerable to anyone? ] 15:12, 25 May 2006 (UTC) | |||
:::Would you prefer admins not vandal fight? I'm not understanding the point of your entry here. I can be perfectly neutral when dealing with non-vandal edits. --] 15:14, 25 May 2006 (UTC) | |||
::::If you want my opinion, then I would like admins to spend more time doing productive work, rather than chasing vandals, or perceived vandals. What will the vandal-chasing admins do when the vandals have all been banned? ] 17:11, 25 May 2006 (UTC) | |||
:::::We will worry about that if it ever happens. Getting round to properly dealing with all the problems with fair use, to start with, I suppose. ] 20:48, 25 May 2006 (UTC) | |||
:::::Many non-admins work very hard to deal with vandals, and many admins do as well. Both groups are necessary, and Misplaced Pages would in fact go to hell without all the hard work they do; you seem to be under the misapprehension that vandal-fighting is just something people do for their own amusement. -- ] 23:31, 25 May 2006 (UTC) | |||
::::::But that's not to say it's not fun! Wallie: You asked if there is a register of vandals, and in a way, there is. First, the talk pages of vandals often have many warnings left by both admins and non-admins alike. You can usually identify someone who has vandalised before by this. Secondly, there is the block log, where you can see if a user has been blocked before, and usually, this will be because of vandalism. I'm not speaking from the POV of an admin (since I'm not one), but if I see an edit made by an editor who has vandalised before, I'm going to check it a little more carefully than I otherwise might.--<strong>]]<sup>(]/])</sup></strong> 15:01, 28 May 2006 (UTC) | |||
*Hey Wallie -- maybe it would help to look at it this way. I fight vandals so ''you'' don't have to. Also, you ask, ''If admins come from a background of vandal fighting, how can they be neutral when coming up with a ruling to revert an article's content, NPOV, etc?'' They can't, and they don't have to be. We don't demand neutrality from Misplaced Pages editors or admins; we demand neutrality in Misplaced Pages ''articles''. Certainly, few admins are neutral toward vandals -- and for the most part, we know them when we see them, and we see them a lot. The small handful of editors who may or may not be vandals are handled on a case-by-case basis (and take up WAY more time than the obvious vandals; anyone can tell that editing PENIS into articles not related to penes is vandalism, but it can take a long time to recognize more subtle patterns of disruptive and destructive behaviour). When all the vandals have been banned, we will be very happy, and we'll invite you to the big snowball fight we'll be having in Hades. --]] 02:20, 29 May 2006 (UTC) | |||
== Notability of marketing managers == | |||
Here's a cute one. Marketing managers of national brands aren't normally considered notable for Misplaced Pages purposes. If the marketing manager is for a band, does that make them notable? We've been arguing over this at ], who was the manager for ] back when they were famous. | |||
If you go down the list for notability in ] or ], this guy doesn't really qualify. He didn't contribute musically. He's not listed as having won any awards, and there are awards a rock band manager could win. Nobody ever wrote a book about him. He didn't write a book about himself. There's only a little biographical info about him available on line. He was apparently a good band manager and promoter. He did record a record once himself, early in his career, but that apparently made it clear he had no future as a performer. | |||
For other bands, the band's manager usually seems to have an article only if the manager was a musical contributor. ]'s manager isn't even mentioned. The Beatles' ] does have an article, but that's an unusual case; someone is making a movie about him as "The Fifth Beatle". Britney Spears' original manager ] does not. | |||
], the manager of ], does not. | |||
Am I being too harsh here? Someone really wants him to have an article (why a '70s band manager would have a fanatical fan at this late date is puzzling, but whatever). Comments? --] 19:47, 24 May 2006 (UTC) | |||
:So he had a seminal role in creating one of the most successful and lasting rock bands of the last few decades and is still their manager after nearly 40 years? Provided this is all verifiable, I don't see any problems with the article. ] 20:04, 24 May 2006 (UTC) | |||
:If you don't think it's notable enough to justify inclusion, nominate it for deletion and settle it in AFD discussion. That's the usual context for resolving this kind of thing. ] 20:24, 24 May 2006 (UTC) | |||
::I'm trying to get a sense of the policy before using AfD. It's a close call, and I don't generally start an AfD unless the situation is reasonably clear-cut. Strict enforcement of notability criteria on fancruft really bothers many people. For example, if you take ] literally, at least 80% of the ] articles could be deleted. Yet no one seriously proposes doing that. There's a definite tendency to cut popular culture a lot of slack here. So I'm looking for opinions on where to draw the line. --] 20:50, 24 May 2006 (UTC) | |||
:::You can't strictly enforce notability criteria, because they're just guidelines. Nowhere on ] does it say, nor has anyone ever seriously claimed, that people not meeting the list of criteria are automatically deletable; in fact, a short while ago I highlighted the lines on the WP:BIO page that say the ''exact opposite''. -- ] 20:55, 24 May 2006 (UTC) | |||
:::"''If you take ] literally, at least 80% of the ] articles could be deleted.''" Well, yes, besides the fact that that's completely and totally untrue.--] 19:35, 25 May 2006 (UTC) | |||
:::Inclusion criteria are very fuzzy. They constantly shift and set precedents in new topic areas not addressed before. The best VfD residents have thorough knowledge of these precedents and trends and can apply them to specific examples like yours. That's why I think nominating it is a good idea - not necessarily because it should be deleted, but because it creates a useful venue for discussion that is scoped appropriately and will attract the right people. If you're worried about upsetting the authors just try to take the edge off it in your nomination with some kind words. ] 09:00, 26 May 2006 (UTC) | |||
==Criteria for listing events on ]== | |||
Please look over this proposed policy: ]. This is a proposal I started along with ] and ], as we have encountered issues with the lack of policy in guiding the posting of events as frequent editors in this Wikiproject. Feel free to provide comments and constructive criticism. This has already been up for discussion in ] for almost two weeks. ] 20:56, 24 May 2006 (UTC) | |||
== Offensive comments in Village pump (policy) discussions == | |||
In the ] section above ] stated "the nominator of this article is Jewish and five of the delete votes at least were from Jewish users". How does Arniep know this, and why on earth would it make a difference, or even be relevant to this particular discussion? Can Arniep explain how he calculated this, and why he made this outrageously gratuitous and offensive statement? ]<sup><small><font color="DarkGreen">]</font></small></sup> 21:33, 24 May 2006 (UTC) | |||
:I'm not sure why you started a new thread here. Many, many articles related to Palestinians, or to people or organisations who support Palestinians, are consistently treated in a negative way by many Jewish users. The same can be said of Kurdish articles and Turkish users, and to a lesser extent of articles related to the Northern Ireland and Yugoslavian conflicts (I'm sure there are other conflicts that also have issues). ] 21:45, 24 May 2006 (UTC) | |||
Arniep, who are the 6 Jewish editors you have mentioned, and how do you know this? ]<sup><small><font color="DarkGreen">]</font></small></sup> 21:52, 24 May 2006 (UTC) | |||
::Many people who voted delete either identify themselves as Jewish or have a strong interest in Jewish articles. I don't think this is a particularly controversial thing to point out given that it consistently happens with any article related to the Palestinians whatsoever. ] 22:05, 24 May 2006 (UTC) | |||
:::Which ones identify themselves as Jewish, and which ones have you simply ]d yourself? ]<sup><small><font color="DarkGreen">]</font></small></sup> 22:08, 24 May 2006 (UTC) | |||
:::: I feel that I need to step forward and plead guilty: I belong to ], as well as to a few other categories. Arniep, are you implying that editors who identify themselves as Jews, Hindus, Kurds, Blacks, gays, females, etc. are by nature unable to objectively cover certain topics? If not, please explain why you repeatedly bring this up. ←] <sup>]</sup> 22:12, 24 May 2006 (UTC) | |||
:::: Jayjg, I'd like to ask you to be careful here. Asking Arniep to explain clearly what he's alleging is quite legitimate; he does seem to be saying that Jews (or those who edit Jewish articles) tend not to be objective on certain subjects, and that's a problem. But comparisons to European antisemitism aren't necessary. -- ] 22:41, 24 May 2006 (UTC) | |||
:::::Well, I'm a bit concerned about his need to gratuitously try to identify 6 editors as Jews. As far as I know, only a couple of the people on the page in question have actually identified ''themselves'' as Jews. Moreover, his introduction of that topic was bizarrely unrelated to the topic at hand; if he's upset about the way one editor worded his "delete" vote, why would he have to then start talking about which voters on that page were, in his eyes, Jewish, and which weren't? ]<sup><small><font color="DarkGreen">]</font></small></sup> 22:58, 24 May 2006 (UTC) | |||
::::::I agree 100% with all your concerns. However, I do not think the comparison to historical antisemitism is helpful; it is, rather, inflammatory. We can make it clear that generalizations based on religion/ethnicity are unacceptable without such comparisons. -- ] 23:03, 24 May 2006 (UTC) | |||
:::::::I think his actions were triply offensive: first he "outed" the editors he believes are Jews. Then he made generalizations about them based on his assumption that they are Jews. Finally, when confronted, rather than apologizing he justified his actions. Not to mention the irony of him doing so in a section he started about edits by some other editor which ''he'' found offensive. And I don't think my implied analogy was anywhere near as offensive as his statements and subsequent justifications, which prompted it in the first place. ]<sup><small><font color="DarkGreen">]</font></small></sup> 23:07, 24 May 2006 (UTC) | |||
*Arniep has a point here and I don't think he is trying to be antisemitic or anything. A Jewish editor is more likely to keep a Jewish-related article than, say, an atheist editor. That being said, I may be horribly wrong in my logic, but please be civil. --] 22:50, 24 May 2006 (UTC) | |||
::Disregard the above statement. --] 23:23, 24 May 2006 (UTC) | |||
**What is the purpose in making such generalizations? If you think a specific editor is doing something inappropriate, say so. But saying "Jews have a tendency to do X" has no place on Misplaced Pages. None. None, none, none. -- ] 22:53, 24 May 2006 (UTC) | |||
Arniep stated "the nominator of this article is Jewish" ... As the nominator of the article, I'm not sure why Arniep thinks he knows this since I am careful to not state my ethnicity (or ethnicities), since it is irrelevant. Furthermore, I take offense that he would assume that I (or others) are somehow anti-Palestinian, based solely on alleged ethnicity, or that my proposal to delete a non-notable article that doesn't measure up to Misplaced Pages standards is somehow based on a racist agenda. It's ironic that someone so concerned about another editor being offensive makes comments a hundred times more offensive himself. --]<sup><small>( ] | ])</small></sup> 22:55, 24 May 2006 (UTC) | |||
:Exactly. And even if you and all the others had self-identified as a Jew, what is Arniep suggesting? That votes by Jews should count for less than votes by non-Jews? Should votes by Jews have little identifying marks beside them, so that the closing admin should know to discount them? ]<sup><small><font color="DarkGreen">]</font></small></sup> 23:01, 24 May 2006 (UTC) | |||
::I agree, Arniep's statement is one of the most ignorant comments I have ever seen on wikipedia. It's bad enough to simply hold such views; it's just downright insane to actually articulate them and expect people to take you seriously. - ] | ] 23:10, 24 May 2006 (UTC) | |||
:::Sorry, but why is it so insane to point out that many Jewish users, or users with a strong interest in Jewish articles, tend to treat articles related to Palestinians, or to organizations or people who have shown support for the Palestinians, in what I consider to be a negative manner? To say that assuming that people who have a strong or sole interest in Jewish articles may be Jewish is somehow antisemitic is ridiculous; if someone made a large proportion of their edits to Serbian articles, English articles or Turkish articles I would make similar assumptions. And yes, while it certainly isn't the case that all Serbians dislike Albanians or all Turks dislike Kurds, I think we would be naive to think, in an encyclopedia that is free for anyone to edit, that people on opposing sides of conflicts will not try to ensure that "''their side''" of the story is the one that is most prominent. ] 23:25, 24 May 2006 (UTC) | |||
::::Such generalizations as these aren't useful; they are, rather, detrimental. Again, if an individual editor does something inappropriate, deal with that—don't you ''dare'' bring his presumed ethnicity into it! -- ] 23:29, 24 May 2006 (UTC) | |||
:::::First let me say that I am absolutely not antisemitic in any way or form. I think this is a problem that affects many conflict-related articles, but Palestinian articles do seem to be some of the worst affected. SCZenz, you seemed to agree with me above that it is not implausible that some Jewish users may not be totally neutral on articles related to the Palestinians or organizations who are seen to support them. In my experience of watching many articles, this is quite a widespread problem, where some Jewish users will tend to support each other in article disputes or afds/cfds to ensure that a certain wording or certain links are/are not used. Whether this is partially deliberate or not I have no way of knowing, but the result is that many articles relating to the Palestinians are not neutral at the moment. ] 23:47, 24 May 2006 (UTC) | |||
:::::::If you want to say that users A, B, and C tend to push POV X and seem to inappropriately vote to delete articles on topic Y or vote keep on topic Z, that is one thing. The problem is you are making ''being Jewish'' the issue, and not the POV-pushing. I find it perfectly plausible that certain specific users may work together to push POVs on almost any topic, but I do not relate this to their ethnicity no matter what. Does that make sense? -- ] 23:58, 24 May 2006 (UTC) | |||
::::::::Of course many Jewish people are not pro-Zionist; it is just my observation that certain users who happen to be Jewish, or show a strong interest in Jewish articles, seem to be consistently involved in articles related to Palestinians, or to people or organizations who are seen to support them, in what I consider to be a non-neutral manner. ] 00:10, 25 May 2006 (UTC) | |||
::::::::Arnie, you're assuming that just because you disagree with people, it means they are biased. But you have a horribly strong POV when it comes to Israel, and you make no effort to hide it. I've seen no evidence that you're capable of being neutral in that area, which means you're in no position to judge whether anyone else is. ] <sup><font color="Purple">]</font></sup> 12:10, 25 May 2006 (UTC) | |||
:::::::::SlimV where exactly have I ever stated my point of view on Israel or Palestine? The answer is exactly nowhere. I am attempting to be neutral whereas other users are not. ] 15:37, 25 May 2006 (UTC) | |||
::::::::::As you know, I've asked you many times to stop representing your strong POV as the facts or as NPOV, which is what you often do. ] <sup><font color="Purple">]</font></sup> 20:05, 25 May 2006 (UTC) | |||
:::::::::::Here's a comment saying that much of the Israeli and pro-Israeli media "deliberated distorted" the truth about Rachel Corrie. "Deliberated distorted" is a very strong comment to make about independent journalists, and not just in Israel, but all over the world, all working together to lie outright, in your opinion. That is your understanding of "attempting to be neutral." ] <sup><font color="Purple">]</font></sup> 20:26, 25 May 2006 (UTC) | |||
::::::::::::I'm requesting again, strongly, that this discussion not be continued further. See note on your talk pages. -- ] 20:35, 25 May 2006 (UTC) | |||
::::: Arniep, it's easily provable that there is a full spectrum of opinions among each ethnic group and to bring one's ethnicity into a dispute, especially as a group, is ''never'' warranted and is highly uncivil. I feel that this is the worst ad hom offense, whether intentional or not. | |||
::::: Even if we accept the idea that most ethnic X are somehow "that way", what do you propose we do about it: talk behind their back (as you did), introduce quotas, segregate them to their little wiki-stans or maybe ban them? | |||
::::: Finally, your political soapboxing: it's a huge mistake to equate being ''pro-Israel'' with ''anti-Palestinian'' and vice versa. This goes against the very idea of peace. To your ''"many articles relating to the Palestinians are not neutral at the moment"'' - I can respond the same about many articles related to Israel, Zionism and Jews. Let's get to work on fixing that - without alienating anyone because of their ethnicity. BTW, in my recent admin nomination, many users whom I considered pro-Palestinian voted for me and I am very proud of it. Everyone should be judged by what they do, not who they are. ←] <sup>]</sup> 00:22, 25 May 2006 (UTC) | |||
:::::: Hehe, Humus sapiens, your statement is so clear I feel like making jokes about it, good job in stating the situation clearly. ] 00:16, 25 May 2006 (UTC) | |||
::::::If I implied that ''all'' Jewish users contribute to Palestinian articles in a way that overtly puts Israel and its supporters in a good light and Palestinians and their supporters in a bad light, then I apologize for giving the wrong impression. It is just that the people who I consider to be putting an anti-Palestinian bias on articles about Palestinians, or about people or organizations seen to support Palestinians, either identify as Jewish or seem mainly interested in editing Jewish-related articles. ] 00:29, 25 May 2006 (UTC)
::::::: Arniep, it was ''you'' who identified "the group of 6 Jews" and selected them for undue criticism. ←] <sup>]</sup> 00:39, 25 May 2006 (UTC)
::::::::Well, users who happen to be Jewish often do seem to be the leading critics of the Palestinians or of any person or organization that supports Palestinians. As far as I can see there was no reason why the information in that article should have been deleted, as the person was involved in a notable incident while working for a notable organization, and the flood of delete votes seemed to be a whitewash. ] 01:36, 25 May 2006 (UTC)
:::::::::'''Wait, wait, wait'''. What does the article being deleted have to do with anything? Or is that the real beef here? -- ] 02:18, 25 May 2006 (UTC)
::::::::::Not at all. I don't think the article should have been kept, as he isn't really a notable individual in his own right, but I could see no fathomable reason why the information should not be included in the ] article, which would warrant a merge. ] 15:14, 25 May 2006 (UTC)
:::::::Fine. I have no doubt that there are many people out there whose motives are less than pure. However, perhaps in the future, when picking examples of that kind of behavior, you might actually confirm that what you've picked ''is'' an indisputable example of that kind of behavior, instead of making unfounded accusations behind someone's back (which is something you might want to consider apologizing for sometime soon, by the way, because I think it's a pretty shitty thing to do). -- ] 00:45, 25 May 2006 (UTC)
::::::::Actually, Humus said he intended it as a (sick) joke. I would prefer that people keep their sick/bigoted jokes out of Misplaced Pages. Thanks. ] 01:36, 25 May 2006 (UTC)
:::::::::Besides the fact that there really wasn't anything bigoted about it, I really don't think you're in any place to accuse others of prejudice. - ] | ] 01:55, 25 May 2006 (UTC)
:::::::::On my talk page, you claim that HS "admitted he intended it as a sick bigoted joke". Considering that his exact words were "of course it was a joke, perhaps tasteless", I would have to say that's a rather revisionist view of very recent history. What, so now he's a ''bigot''? Just out of curiosity, what exactly does that make you?
:::::::::You know, when most people find that their problem is that they're stuck in a hole, they have the sense to stop digging...
:::::::::(On an unrelated note, I indented Moshe Constantine Hassan Al-Silverburg's comment to make the conversation a little easier to follow -- hope nobody minds.) -- ] 02:08, 25 May 2006 (UTC)
::::::::::Well, anyone that would find a joke about someone being killed funny in any way would, by my and many other people's definitions, be a bigot. Remember, people used to make "jokes" about hanging African Americans. ] 15:37, 25 May 2006 (UTC)
:::::::::::Yes, but as has already been stated, this was not a joke about killing a person. Furthermore, even if it ''had'' been one, that doesn't mean that it would have been bigotry any more than making a blonde joke or a fat joke or a white joke or a black joke or a lawyer joke or a Jew joke. It is entirely possible to make and enjoy stupid jokes without being a bigot, and there is such a thing as dark humor. Practicing it may be considered tasteless, sure, but it is most certainly ''not'' an automatic indication of bigotry. When you equate HS's joke with "jokes" about hanging black people, you are effectively (and, I believe, quite intentionally) suggesting that he was not, in fact, kidding -- that he really wanted Phil Reiss to die. Your rhetoric is becoming increasingly tiresome, and I find it more and more difficult to assume good faith on your part when you repeatedly refuse to apologize for talking about someone behind his back, baselessly accuse others of bigotry and effectively equate them with Klan members, and still haven't explained why and how you identified five editors as Jews (just like you declined to answer my repeated question, "what would you like to do about it?" earlier in the discussion). All this leads me to believe that your agenda here is not an objective attempt to improve Misplaced Pages or to fix an actual problem, but to stand on a soapbox. -- ] 19:20, 25 May 2006 (UTC)
::::::::::::I am very strongly requesting this discussion be ended. See my comment at the bottom of this section, or your talk page. -- ] 19:29, 25 May 2006 (UTC)
:::::::::::::Yeah, I didn't notice your comment at the bottom until well after I'd posted the above -- really wasn't my intention to beat the dead horse after a request to stop. Sorry about that, walkin' away now. -- ] 20:10, 25 May 2006 (UTC)
Arniep, let's recall that some of the most outspoken critics of Israel and some of the leading figures in the Palestinian rights movement are ethnic Jews. See ], ], ], ], etc. Like them or hate them, your conspiracy theory (let's call it what it is) doesn't hold true. For the future, please remember that it is wrong to mix ethnicity with political views. Considering that NPOV policy was not killed (pun intended) last time I checked, you are doubly wrong. This is my last post on the subject. ←] <sup>]</sup> 03:34, 25 May 2006 (UTC)
:Nowhere did I state that there weren't Jewish critics of Israel. It is just that I have noticed that articles on Palestinians, or on any person or organization that has shown support for the Palestinians, are consistently edited in a non-neutral manner by users who happen to be Jewish. I would point out the same if many Serbs were editing Albanian articles, or Turks were editing Kurdish articles, in a non-NPOV manner; and yes, there are Serbs who do not hate Albanians and Turks who do not hate Kurds. ] 15:37, 25 May 2006 (UTC)
:Damnit Humus, how could you be so tasteless! We don't "kill" NPOV policy! This clearly shows racism towards NPOVs; they are people too. I will not stand for this indignity. Let us forget that the original writer of this section, as well as 6 of the editors who supported him, are Atlanteans; I am tired of the entire Atlantean race attacking the innocent NPOV people. I propose that we block all Atlanteans and prevent them from ever editing again in the future. - ] | ] 04:56, 25 May 2006 (UTC)
::I'm interested to know which of the six voters Arnie believes are Jews, and why, because it seems to me that Arnie, who has a strong anti-Israel POV, is identifying as Jewish anyone who frequently disagrees with him in that area. Having made that identification, he concludes that Jews are always likely to reason in a certain way, without realizing that his conclusion is based on a ] of his own making. The only way to avoid getting tied up in knots like this is to take SCZenz's approach, ''viz''. don't be prejudiced; judge people as individuals. ] <sup><font color="Purple">]</font></sup> 12:06, 25 May 2006 (UTC)
:::SV, Lulu is Jewish but he has expressed opposition to Israeli policies. I have never once expressed "a strong anti-Israel POV", as you stated. I am merely pointing out that many Jewish users, or users who have a strong interest in Jewish articles, often edit Palestinian-related articles in what seems to be a non-] manner. It is not "evil" to assume that someone may be Jewish from their edits; if a person heavily edited Turkish articles and continually expressed a negative attitude towards Kurdish articles, I don't think it would be unreasonable for me to assume that that user may be Turkish. ] 15:37, 25 May 2006 (UTC)
:::::Who are the six? ] <sup><font color="Purple">]</font></sup> 20:27, 25 May 2006 (UTC)
:::People are individuals, but when they align themselves with a group, ] can come into effect. I think that bias has no place in any article, and if someone thinks an article is biased, whether the group in question is Jewish or not, then that person has rights too, and not only the group. As for being careful when Jewish people are involved, we should be careful with anyone, Jewish or not. In this discussion, we are discussing Jewish groupthink. Jewish people are no better and no worse than anyone else, and should not expect any special treatment, again either good or bad. Nor should any other group, be they Christian, Buddhist, Muslim, Shinto, Hindu, or anyone else for that matter, expect special treatment because of membership of a group. It would be wise to get an outside opinion (ruling) on what the real neutral position is, if participants in a discussion are deadlocked and want a resolution. ] 15:04, 25 May 2006 (UTC)
::::No one has said Jews are better or worse than other groups. You're introducing a red herring. ] <sup><font color="Purple">]</font></sup> 15:36, 25 May 2006 (UTC)
:::::I didn't intend to. I believe my input is relevant. ] 16:15, 25 May 2006 (UTC)
::::::I think what Wallie may be saying is what I tried to say earlier: since this is an encyclopedia that can be edited by anyone in the world, we would be naive to think that, on articles connected to conflicts, people on one side may not attempt to push a particular article version that fits what they consider to be neutral, but which may not be regarded as neutral by people on the opposing side, or by people who have no real opinion on the conflict. We saw this with an article on gay rights in relation to Catholic users, and with Kurdish articles and categories in relation to Turkish users; fortunately many users saw what was going on there and managed to balance out the one-sided opinion. Unfortunately this sort of balancing doesn't seem to be occurring at the moment in articles related to Palestinians or those seen to support them. ] 16:18, 25 May 2006 (UTC)
:::::::Given your strongly anti-Israel POV and edits, it's not surprising you would say that. More importantly, though, your "guilt by stereotyping" argument is, in fact, the very basis for most racist generalizations. It's astonishing you would continue to try to support this bigotry. In addition, you have failed to note the circularity of your argument; you defined 6 editors as "Jews", and then proceeded to say they were biased because they were Jews. Only one is a member of the Jewish Wikipedians category, so how did you sniff out the other 5? What characteristics of the "Jewish race" did you note which led you to that conclusion? ]<sup><small><font color="DarkGreen">]</font></small></sup> 18:49, 25 May 2006 (UTC)
::::::::I just want to be clear about how Jayjg is accusing me of being strongly anti-Israel, so other people can judge for themselves: firstly, I rejected a ] link on the Palestinians article, and secondly, I rejected the inclusion of right-wing descriptions of ] as a collaborator with terrorists. Maybe in America that would be seen as a strong anti-Israeli POV; in the U.K. it is most definitely not. ] 12:06, 26 May 2006 (UTC)
I do not think this debate should continue; I have warned ] on his ] that these generalizations and assumptions about religion/ethnicity cannot be tolerated (for Jews, Serbs, Catholics or anyone else), whether he believes they are justified or not. I gave this warning in my capacity as an administrator, and further issues are more properly a topic for ]. -- ] 19:13, 25 May 2006 (UTC)
:I have never ever expressed "strongly anti-Israel POV and edits", as Jayjg and SlimVirgin claimed; I would encourage them to post diffs to back up their assertions (there aren't any to find, BTW). This is clearly just a way of encouraging editors to turn away and of portraying my claims about Palestinian article neutrality as just the rantings of a bigot. I am not a bigot, and this is exactly what I am trying to fight here. ] 19:29, 25 May 2006 (UTC)
::You have said things that are clearly inappropriate. This has been explained to you, clearly, on your talk page. I say again that everyone should just walk away from this discussion. Nobody has to be labelled as a "bigot," but ethnic/religious generalizations are beyond the pale and I don't think it is productive to argue about them anymore. -- ] 19:31, 25 May 2006 (UTC)
:::Hi Arniep. I think it is better to concentrate on writing stuff in articles rather than trying to discuss things with people who probably have a different POV to you anyway. There are millions of people out there who read Misplaced Pages to get information. If you put in something that is true and well researched, it will speak for itself. As the old saying goes, "the truth has a certain ring to it". Also, it is sometimes a good idea to work on some completely unrelated topic. I know. I got into some strife with someone, and quarantined myself from the problem area for a month. But I will be back, stronger than ever and refreshed. Most importantly, ignore incoming name-calling. It is always the name-caller at fault, not the callee. There is all too much of this in Misplaced Pages. It is almost a culture. However, individuals do improve over time. I suppose they grow out of it. ] 20:15, 25 May 2006 (UTC)
If a large number of commenters/voters belong to a small special-interest group, it is entirely proper to point that out. If these biases are not identified and discounted, Misplaced Pages is just a plaything of special-interest groups. It is the consensus among global Wikipedians that matters. A "consensus" created by partisans is not a consensus at all and has no validity, but I don't think this point is observed nearly often enough. ] 13:38, 26 May 2006 (UTC)
== Category usage from user space ==
I know we do not allow redirects to user space, but some users seem to be getting around this by using categories. For an example see ]. Is this an accepted policy? ] 23:42, 24 May 2006 (UTC)
:See ]: "If you copy an article to your user namespace (for example, as a temporary draft or in response to an edit war) you should decategorize it." I usually suggest a change like the one at ], as often the addition of a category is an inadvertent mistake after moving a draft into the Userspace. ] 23:46, 24 May 2006 (UTC)
::An alternative is to recategorise it in a "WikiProject" category. This can help other people find it (unless of course you don't want people to read it until it is ready...). If you do want others to comment on things in progress, categorising is one possibility. For example ] has two User subpages linked from it. If no categories have been created for the WikiProject, or there is no suitable WikiProject to leave a link on, it is still possible for people to find these drafts by clicking "What links here" on articles wikilinked from the draft. Currently, I know of no way to prevent these "draft" pages in User space from showing up on the "What links here" pages. Should the delinking of such drafts be encouraged by enclosing them in <nowiki> "<nowiki> DRAFT </nowiki>" </nowiki> tags? ] 10:30, 25 May 2006 (UTC)
:::Either the nowiki tags, or simply adding a colon right before the "C" in "category" (<nowiki>]</nowiki>), will work to deactivate the categories. I do this as a matter of course to user subpages if I see them in articlespace categories; just make sure you leave an edit summary when you do so, or else someone might be wondering why you're mucking about in someone else's user space. ] 19:42, 25 May 2006 (UTC)
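To illustrate the two deactivation techniques discussed above, here is a minimal wikitext sketch; "Example" is a placeholder category name:

```wikitext
<!-- Active: this tag files the page in Category:Example -->
[[Category:Example]]

<!-- Leading colon: deactivated, renders as an ordinary link to the category page -->
[[:Category:Example]]

<!-- nowiki tags: the markup is not parsed at all, so it displays as plain text -->
<nowiki>[[Category:Example]]</nowiki>
```

The colon form keeps a visible link for readers while removing the page from the category listing, which is usually what a userspace draft wants.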
::::Right. I know that method of linking to a category, but I was talking about putting the nowiki tags around the whole article to prevent the User subpage showing up in "What links here". Let's find an obscure page, such as <nowiki>]</nowiki>, and see if this works. ] 20:45, 25 May 2006 (UTC)
:::::It will work, but is it necessary? User pages already show up in "What links here" and changing every one would involve a massive amount of work. Furthermore, that link is designed to show links between all namespaces; if we want it to work otherwise it would be a better idea just to alter the software not to show userspace link-tos. ] 20:50, 25 May 2006 (UTC)
:::::I for one would find rough draft pages a lot less useful if they couldn't have functioning internal article links. I don't see the "what links here" inclusion as a problem, and certainly not one that would outweigh that functionality. ] 20:52, 25 May 2006 (UTC)
== Too many boilerplates ==
Recently, it seems that every other article I find starts with a big ugly boilerplate, usually <nowiki>{{cleanup}}, {{cite}} or, if the topic is even slightly controversial, {{NPOV}}</nowiki>. I see this on really good articles, even featured articles. This is becoming like the "under construction" banners that just about every webpage had in the mid-1990s.
I suggest that boilerplates should only be put at the top of an article if it is in such bad shape that we need to warn the reader right away. For smaller problems, how about putting the markers at the bottom of the page, just like stub markers? And for even smaller issues, put them on the talk page instead of in the article itself. Sure, it would take two clicks to find the article when browsing a cleanup category, but I think that's OK. The current situation makes Misplaced Pages look very unprofessional. --] 04:01, 25 May 2006 (UTC)
*If the article itself looks very unprofessional, as is often the case, then the tag doesn't really hurt anything. While good style is important, we are by our nature not professional, nor are we pretending to be. It's better to be able to suggest to readers that they can help improve the article (or at least let them know that the article is poor and we know about it) than to just have poor content out there as if it's decent. --] 17:21, 25 May 2006 (UTC)
:The problem is that it tends to be the good articles (which are also more popular and more susceptible to edit warring and disagreements) that have these tags, when many not-so-good articles don't have them. Maybe these tags need to be on the talk page rather than the front page? Featured articles shouldn't have these tags, and if they do it ''may'' be someone POV-pushing. ]
This is because editors are working towards the ]. ] 20:12, 29 May 2006 (UTC)
== "What links here" and "Special pages" tools == | |||
Have there been any proposals to have the self-referential tools "What links here" and "Special pages" used in article namespace for the readers of Misplaced Pages? I have seen people linking to Special pages, especially the index prefix one. I haven't seen people linking within an article to a "what links here" page for an article, but I suspect it has been done somewhere. I agree that these are useful tools for the readers, as well as the editors, but in their current incarnation they are rather self-referential. Are the tools advertised to readers as an alternative to the options of: searching, browsing (categories, portals, navigational boxes)? ] 11:08, 25 May 2006 (UTC) | |||
:There should be no links in article space to "What links here" or "Special pages", see ]. -- ] <small>(])</small> 14:32, 25 May 2006 (UTC) | |||
::I agree entirely. Unfortunately, the policy document you pointed me to does not make this clear. "Special pages" is not mentioned at all, though it is implied in the mention of "What links here": ''"do not refer to any link in the sidebar or along the top of the screen"''. But even that is only saying "do not refer to them". It is not saying do not link to them. I think the policy needs rephrasing to make this clear. | |||
::But stepping back from that for a moment, you haven't said why readers shouldn't be pointed at these useful browsing tools? I want to look up the pages that help readers navigate Misplaced Pages, as a mention of these tools would be useful there. Does anyone know where those pages are? ] 17:21, 25 May 2006 (UTC) | |||
I found ] which points readers towards these useful tools. However, they are not really held up as useful '''browsing''' tools, which I think they are. Would anyone object if I wrote something on how to use these tools to browse Misplaced Pages? Even better, does anyone know of something that exists already, written for readers, not editors? ] 17:24, 25 May 2006 (UTC) | |||
:PS. rather than "Special Pages", I meant "All Pages" is a useful browsing tool. ] 17:25, 25 May 2006 (UTC) | |||
::Help about browsing, for readers, is at ] (the ''Help'' link in the navigation frame on the left, then ''Browsing Misplaced Pages''). You (or anyone else) is entirely free to make improvements to the help files (they can be edited, just like any of the articles). So, in a sense, I think readers are already pointed at these helpful tools. The browsing experience is not referred to in the text of articles due to the reasons mentioned at ]. Misplaced Pages's content is available under the ], and might be at a different web site (without the same tools) or might be in printed form. -- ] <small>(])</small> 01:33, 26 May 2006 (UTC) | |||
:::Oh sure, I understand all that. I believe there is also a "self-reference" tag that can be used to mark things like this, but I suspect it is used sparingly, or in slightly different contexts, or maybe its use is not widespread enough yet to have caught on. But in any case, I think those help files are what I was looking for, and I'll try and add something at some point. Thanks. ] 09:45, 26 May 2006 (UTC) | |||
::::PPS. Looking more closely, it was actually ] (different from All Pages in that it doesn't just display a fixed number of pages after a point in the index, but rather it truncates at both beginning and end of a defined set) that I was finding very handy to browse with. ] 09:50, 26 May 2006 (UTC) | |||
== Policy consideration ==
]
Signed: ] (]) ] 11:20, 25 May 2006 (UTC)
== Fair use images in templates: exceptions ==
] -- I'd appreciate any comments on the associated talk page. Thanks. ] 14:29, 25 May 2006 (UTC)
== Removing dictionary definitions is outside process? ==
If people are removing dictionary definitions and adding a link to Wiktionary, should the removed content be added to Wiktionary if it is not there? I think so; otherwise you are effectively removing material without going through the transwiki process. I am thinking in particular of disambiguation pages, where it is common to replace dictionary definitions with a link to Wiktionary. Unfortunately, the Wiktionary entries sometimes lack the definitions that were removed from Misplaced Pages. What should be done in those cases, especially if an editor does not have time to go and add them to Wiktionary themselves? ] 17:30, 25 May 2006 (UTC)
:In my experience, editors who remove them usually mention that they have already transwikied them. If they don't say that, then there might be an issue. ] 00:12, 26 May 2006 (UTC)
::Do you have any examples where you're sure they weren't actually transwikied first? ] 00:13, 26 May 2006 (UTC)
:::Not to hand, but if you pick any random "dictionary" (i.e. different meanings of a word) disambiguation page starting from ], find ones with Wiktionary links, and then look back through the page history to the point where the Wiktionary link was added, I suspect in some cases some dictionary definitions might have been removed without being transwikied. The example I picked almost at random is ]; some of these are dictionary definitions that should be transwikied, though in this case they have been left in Misplaced Pages.
:::But I really just want to clarify the principle here. I've looked at ] (''"Misplaced Pages articles are not lists of such definitions. There are, however, disambiguation pages consisting of pointers to other pages; these are used to clarify differing meanings of a word."''), and ] (''"Dictionary definitions don't belong here."''). These don't seem entirely consistent, with the one implying that dictionary definitions of a sort ("different meanings") exist on disambiguation pages, and the other saying that dictionary definitions don't belong here. I would suggest clarifying the former to read: ''"Misplaced Pages articles are not lists of such definitions. There are, however, disambiguation pages consisting of pointers to other pages; these disambiguation pages are not intended to be lists of definitions, but are used to distinguish between articles where the words used in a title can refer to several different meanings."''
:::What I really want though, is a clear statement that dictionary definitions should be transwikied, rather than just removed. Where would I find this? ] 10:12, 26 May 2006 (UTC)
::::There've been several times in the past when I've come across editors removing definitions from disambiguation pages citing ] -- however they had not added the removed definition to Wiktionary -- sometimes the corresponding entry in Wiktionary didn't even exist. In at least one case, where I objected to this, the editor in effect said the content of Wiktionary was not his concern -- he was merely enforcing his interpretation of the "dictionary definitions don't belong here" dictum. This seemed to me a rather unhelpful attitude. IMO, a brief (and I emphasize brief) definition on a disambiguation page is often helpful. I completely support the idea that disambiguation pages should not be dictionary entries, explicating multiple fine gradations of meaning and usage, variations in pronunciation or spelling, different word forms, etymology, etc. However, I think it is counter-intuitive to remove brief definitions from a disambiguation page simply because there is no corresponding article. Many editors will link to a term for the dict-def (whether that is a good practice or not is another matter, but it happens and will more than likely continue to happen) -- I don't see why the disambiguation page shouldn't provide a brief definition for those misguided links. Of course there should also be a link to Wiktionary, for those who want more detail, but in many cases a simple short definition is sufficient. ] ≠ ] 12:40, 26 May 2006 (UTC)
:::::OK. Thanks. I've started a proposal to change the wordings in policies and guidelines ] and ]. ] 13:03, 26 May 2006 (UTC)
== Institutions are notable? ==
I have been thinking about this for a while. I can't decide whether an article about a hospital is ''per se'' notable. An example is ], although there are many, many more out there. Should these be included? Should these be deleted? I'm new here and I really don't know, and can't seem to find any precedent. Sometimes what looks (to me) like non-notable stuff has an article; sometimes I see them being prodded. Is this the right place to discuss this? If so, what are your opinions? If not, where should I move this to? :) Thanks! ] 17:47, 26 May 2006 (UTC)
:] is probably a good place to crosspost it.--] <sup><font color="green">]</font></sup> 01:03, 28 May 2006 (UTC)
== Original research in "Further Reading" ==
In the article on the ], an editor included a section "Further Reading" in which he mentions (as the only entry) an earlier work by the author of the described plan.
The "70 steps plan" (a plan for the 'solution of the foreigner problem' in Belgium) dates from 1992; the work mentioned in "Further Reading" dates from 1991.
The work in question doesn't provide further reading ''about'' the "70 steps plan", but is a tract on the same subject ''as'' the plan.
What is the general feeling about this? In my view, the "Further Reading" section should contain texts ''about'' an article's subject.
Thanks for your input. --] 20:56, 26 May 2006 (UTC)
:It would certainly be appropriate in an article about the author. I think it's okay to have biased or primary sources listed in references, as long as they're relevant and informative. This case seems fuzzy to me though. ] 21:27, 26 May 2006 (UTC)
::The item in question seems to be a primary source. Those, I think, should be listed. Do they help readers? Yes. ] 21:31, 26 May 2006 (UTC)
== New Idea for Page Protection ==
<!-- 00:10, 27 May 2006 (UTC) -->
Please see ] for a new idea on combatting vandalism and protecting pages.
== ] proposal ==
The articles about missiles and unguided rockets badly need a naming convention, especially the Russian ones. So I'm posting it here as requested on ]. - ] 14:51, 27 May 2006 (UTC)
== Admin power ==
Background:
Suppose an admin just "doesn't like your face", finds an excuse, and bans you forever. Note that there are many, many rules, and just by writing any text in an article you could be considered to be pushing a POV, trolling, etc.; and these are just the simple rules.
Can you:
# Appeal this?
# Get help?
# Do you have to apply to the person who banned you?
# If you do seek help publicly, and the admin finds out, can the admin ban you for this?
# If you are banned, how can you contact anyone anyway?
This is not targeted towards anyone. It is just what I interpret the policy as allowing. ] 15:48, 27 May 2006 (UTC)
:] has most of the details, but in summary: yes, you can appeal and get help. Checking with the person who banned you should always be the first option, in case they blocked you by mistake. Contacting another admin first won't change this, as an admin would not unblock you without checking first with the blocking admin. You can contact the blocking admin when banned by email, or by using the {{tl|unblock}} template on your talk page. The important thing to remember, if blocked, whether by accident, autoblock, unfairly, or justifiably, is to be civil. Mistakes happen, and a blocked user will get a much better outcome with politeness, unless of course the indefinite block was 100% justified, in which case it's probably a little late to be polite. For other blocks, being polite will help: no one likes being shouted at if a mistake was made; if the block was unfair, being polite will make that clearer; and if it was justified, accepting it calmly will reduce the chance of it happening again. Regards, ] 16:26, 27 May 2006 (UTC)
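As a concrete sketch of the on-wiki appeal route mentioned above, a blocked user (who can usually still edit their own talk page) places the unblock template there with a short explanation; the reason text below is only a placeholder:

```wikitext
{{unblock|Your reason here: explain why you believe the block was made in error or is no longer needed.}}
```

This adds the page to a category that reviewing admins watch, so an uninvolved admin can look at the block even if the blocking admin is unavailable.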
== How to handle commercial co-option of terms ==
Just curious what the policy is on this... the article on "]s" was brought to my attention by a merge proposal, and the article as it stood was about "Baby Carrots", not about baby carrots (hope you get my meaning). The original writer of the article has been quite understanding and no edit war arose (we're working together on a better version), but I'm just curious how such things should be handled in less friendly situations. At least a partial explanation is on the ] for the ] (we were back and forth on user talks for a while earlier), but my more general question is about how to write a good article that points out the differences between traditional and commercial uses without getting soapboxish about the virtues of one meaning or the other. ] 21:14, 27 May 2006 (UTC)
: If the commercial use (say a brand name) is sufficiently notable then it deserves its own page (say "Baby Carrots (brand)"). If it's somewhat notable, redirect to the generic term and add a brief mention there. If it's not notable, it doesn't deserve a mention anywhere. ] 21:16, 27 May 2006 (UTC)
::In this particular case it's not a brand name, but a marketing term for carrot-shaped carrot pieces, as opposed to the more traditional use of the term referring to young carrots. I wish I could think of some other examples of this. ] 15:30, 28 May 2006 (UTC)
== Discussion of CSD T1 and T2 ==
Since discussion of userboxes and their speedy deletion was overwhelming ], some of us decided to try to organize the enormous, sprawling, and often repetitive debate over the criteria T1 and T2, as applied to userboxes. Thus, we have a new page: ], and all of the discussion of those criteria that was at CSD talk is now at ]. | |||
At the main page, we're trying to somehow organize and fairly and neutrally present all of the relevant arguments regarding userboxes, their recent deletions, and their eventual fate. Editors are very welcome to help out with this project, which I hope may point the way towards a sensible resolution of the current drama and consternation. -]<sup>(])</sup> 08:51, 28 May 2006 (UTC) | |||
== Blanking own talk page ==
I have heard conflicting statements on whether a user may blank (''not'' archive) their talk page. I am unable to find official policy one way or another. Some admins claim that the user's talk page is an official record of other people's interaction with you while other admins claim that you have as much right to blank the page as others do to place comments there in the first place. ] comes in to play here and I tend to fall on the side of the no-blanking people. However, I am at a loss as to what the official policy is. Can someone point me in the right direction with an unambiguous determination? While this is most definitely not an academic question, I will never blank my own talk page. --] 16:49, 28 May 2006 (UTC) | |||
: I'd like to know, too. I've encountered at least one "troublemaker" who blanked his page "just cuz" and once in a while I'll also encounter a talk page blanking that's been done by someone who has chosen to resign from Misplaced Pages. When I was a "clueless newbie" 2 years ago I remember blanking my talk page because at the time I was unaware of how archives worked. ] 16:53, 28 May 2006 (UTC)
*Shrug. I blank mine regularly, politely leaving a note to how to see what was blanked so readers don't have to look through the history to see when I blanked it. The issue is with blanking active warning messages. If, for example, you've done something I find warning-worthy, and I leave a warning on your page, it's considered improper to remove the warning until at least a decent period of time has elapsed or the issue has been resolved. I think that's so that if another editor notices you doing the same thing, they won't also bother you, and if another editor notices you doing it a second time, they'll know you've already been warned about it. Blanking isn't any different from archiving, other than the method of retrieving older versions. --]] 16:56, 28 May 2006 (UTC) | |||
:It's conditional. ] has several examples of this, and blanking the page is ''archiving to history''. This is generally acceptable, providing that the blanking is not removal of recent warning messages, e.g. {{t1|npa}} or {{t1|test3}}. My rule of thumb is that if it is being blanked to hide '''recent''' problems it's not ok, otherwise it is fine. In keeping with this I also support ] rule of not deleting the page administratively. — ] <sup>]</sup> 16:58, 28 May 2006 (UTC) | |||
I should point out that it is indeed blanking of ''current'' warnings that I am most concerned about. Removing warnings that are a month or three old is probably a different matter. But what about a user who is removing ''current'' warnings? We seem to think this is not acceptable? --] 17:01, 28 May 2006 (UTC) | |||
*Yeah. It's bad form, I think, unless the issue was resolved. So in my example above, if it turns out that my warning was ill-founded -- for example, what I might have thought was a 3RR issue was actually you resisting vandalism -- then you'd be justified in removing the warning. But the general case is they should stay at least for a vaguely defined little while. --]] 17:04, 28 May 2006 (UTC) | |||
:But what constitutes ''current''? That day? That week? In my opinion, removing warnings, even a month or so old makes it more difficult for another user to know if this is a habitual vandal or not. ] 17:06, 28 May 2006 (UTC) | |||
::Yeah, that's where the "judgement" thing comes in. --]] 17:18, 28 May 2006 (UTC) | |||
And then there's the type of user who blanks out comments in their talk page from people they dislike, sometimes with snarky edit comments like "Deleted unread... I *told* you not to post to my talk page!" This seems akin to the childish practice of sticking one's fingers in one's ears and saying "La La La I Can't Hear You!". ] 14:42, 29 May 2006 (UTC) | |||
:::I don't think so. Nobody's required to converse with people they loathe; were we to prohibit or even discourage this sort of blanking, we'd be giving carte blanche to harassment ("I can say whatever I want and you can't stop me, nyah nyah nyah"). --]] 14:45, 29 May 2006 (UTC)
== Third Person Policy ==
Is there a policy that articles must be written in the third person? If not, I suggest one should be made. --''']]'']'']]]]''' 20:35, 28 May 2006 (UTC)
*I don't think there's a policy, but it's common sense given our goals. Have you spotted articles written in first or second person? --] 22:02, 28 May 2006 (UTC) | |||
*Some "how to" articles are written in second person. But those shouldn't exist in the first place. Sometimes people mistakenly use first person but get corrected. ] 22:24, 28 May 2006 (UTC) | |||
I've certainly spotted some parts of articles written in the second person. Usually instantly deletable, borderline illiterate rubbish. -- ] 23:08, 28 May 2006 (UTC) | |||
*The how-to articles should be transwiki'd elsewhere quickly, and other instances of 1st/2nd person are generally part of poor edits that can be corrected or expunged. I don't think we really need a policy for this when it's almost always a sign that some content needs attention, to one end or another. --] 14:17, 29 May 2006 (UTC) | |||
== ] ==
This is a policy I have proposed that will make it easier to distinguish the point in an article's edit history when it receives/loses featured status from the current revision of the article.--] <sup>]</sup> 01:22, 29 May 2006 (UTC)
== set minimum age limits ==
Some users are so young they have not been around long enough to be able to accurately perceive the short or long term consequences of their actions. Therefore I think a minimum age limit needs to be set for users and sysops and bureaucrats, etc. so we do not have to waste so much time having to educate them on our positions that have years and years of experience behind them like the portion of an iceberg below the surface. <small> ...] (])</small> 05:32, 29 May 2006 (UTC) | |||
:Um, no. "Not been around long enough to be able to accurately perceive the short or long term consequences of their actions"? That's a massive and absurd overgeneralisation.--] 06:48, 29 May 2006 (UTC)
::Not that massive and not that absurd, I should think. --] 07:15, 29 May 2006 (UTC) | |||
I can think of one user who frankly we would be better off without. He's in elementary school and pretty much incapable of any real contributions (his grammar, spelling, et al are on an elementary level, understandably). He does a lot of goofing off and needs a bit of babysitting ... literally, babysitting. He's not violating any particular policies but an age limit would help to remove kids like him. --] 06:51, 29 May 2006 (UTC) | |||
:And I'm sure one can come up with a long list of people of advanced age who have also shown themselves to be unable or unwilling to make productive contributions here. What would be the point? ] 14:49, 29 May 2006 (UTC) | |||
:The most exceptionally annoying and time-wasting contributors I can think of offhand have been middle-aged... ] | ] | 15:32, 29 May 2006 (UTC) | |||
How would you verify a person's age? ] 07:22, 29 May 2006 (UTC) | |||
We've had respected members of the community at least as young as 13, so any bar would seem to need to be lower than that, at which point there is almost no one left to exclude. Which is not to say that I would want to anyway. I welcome any child who can make productive contributions. ] 07:55, 29 May 2006 (UTC) | |||
I would go so far as to say that it is one of the wonders of the modern Internet that people so young that you'd never trust them face-to-face can be judged on their merits and actions alone. We are enabling these people to take responsibility for something that matters. If a minimum age limit were set for Misplaced Pages, I would quit the project in protest. Besides, it's technically infeasible. ] 08:02, 29 May 2006 (UTC) | |||
:Definitely a non-starter for users. Firstly, it's absolutely unenforceable. Secondly, each editor and indeed each contribution can be assessed on his/her/its merits. I've seen rubbish from people who seem to be mature adults. Admins and bureaucrats are different, but it would require careful identity checks to enforce, and again being 18 or 21 is no proof that you are mature and sensible.--] 08:31, 29 May 2006 (UTC) | |||
::Strong agree. We've had very young respected admins (I've actually been surprised when I found out their age), and we've had fully grown and elderly <s>assholes</s>negatively productive users. Moreover, if we want to collect all the world's knowledge, a kid's perspective can be rather useful, I think. --] 08:39, 29 May 2006 (UTC) | |||
:::Not that this should have any weight relevant to our goals on Misplaced Pages, but allowing youth to take responsibility and contribute to projects in society gives them a stake in society and helps build good character and habits to support it. Rather than hiding from the destructive actions of a portion of irresponsible children, we should engage as many as we can in our project, because in the end it will do both our project and those involved a lot of good. If we aim to help society with our project, the more people that have a stake in our project, the better off we are, even if we have to deal with some roughness along the way. --] 14:22, 29 May 2006 (UTC) | |||
Dear people from Misplaced Pages,
I wrote an e-book about the War in Yugoslavia and some of the historical facts exposed in the book ( may be visited at www.peev.org ) I wanted to share with you. I started the edition on the Bosnian page, and I gave the historical facts about the Vatican implications at the Balkans (also to see in my e-book), but there is a 16-years old boy who accused me for vandalization ?! Please, I want that my message be re-examined by the people who knows and loves history and not by those who have hobbies like this 16-years old boy. And I want to hear excuse, because I wrote you with all my respect and love. I just wanted to share the knowledge with you...nothing else..
Dr. med. Jasmina Peev <small>—The preceding ] comment was added by ] (] • ]) {{{2|}}}.</small> | |||
:You raise various issues here, but I'll stick to the accusation of vandalism. A 16 year old or a hobbyist is no more or less eligible to accuse you of vandalism (a serious charge, of course) than is a 56 year old or a historian. If you specify the diff (or at least the article) in question, then somebody here will take a look at the accusation and judge it on its merits (if any). -- ] 14:38, 29 May 2006 (UTC) | |||
:It appears ] is peeved that was reverted, and then a (self identified) 16 year old left a message on the user's talk page properly warning them to stop vandalizing, and gave some constructive advice on how to participate. That probably shows the value of having a 16 year old. --] 14:40, 29 May 2006 (UTC) | |||
:A 16-year-old who cleans up vandalism is certainly more useful to this project than an older person who creates it. ] 14:49, 29 May 2006 (UTC) | |||
*There are going to be no age limits on Misplaced Pages. --] 15:01, 29 May 2006 (UTC)
:I would have been an amazing contributor if Misplaced Pages had existed when I was a kid =). I oppose any general criterion to exclude editors; it should always be done on a case-by-case basis. ] 23:36, 29 May 2006 (UTC) | |||
== Semi-protection policy for featured articles ==
I suggest that Misplaced Pages put all featured articles under the semi-protection policy for their stay on the main page. On May 28th, 2006, anonymous user ] vandalized the ] article multiple times, deleting large portions of it and replacing them with nonsense. His or her edits were later reverted; nevertheless, several people saw the article with his or her edits in place, and putting featured articles under the semi-protection policy for their 24-hour exposition on the main page could prevent another similar incident. ] 14:47, 29 May 2006 (UTC) | |||
:Please see ], a statement about this from the ]s editor. -- ] <small>(])</small> 17:38, 29 May 2006 (UTC) | |||
== Scientific point of view ==
Because wikipedia is an encyclopedia, and strives to be as accurate as possible, shouldn't we have a scientific point of view, rather than a neutral point of view?
It becomes very difficult to edit articles on pseudoscience, such as ], when describing the scientific point of view is treated as a form of bias. For example, there is strong disagreement on mentioning the mainstream scientific view in the introduction to astrology, as it is might be considered as bias. I think implementing a scientific point of view, will go a long way in improving the quality of articles on wikipedia. At the least, we could amend the NPOV, so that more emphasis is provided to the mainstream scientific view, when disputes arise. ] 15:09, 29 May 2006 (UTC) | |||
:But astrology isn't a scientific subject! (I wouldn't even call it pseudo-science). Any scientific view inevitably will criticize it, and be irrelevant. Don't pretend that Astrology even merits scientific arguments - it doesn't (personally I think it's a load of rubbish, but that's neither here nor there). But it is notable, and it is respected by a large number of people. I think the current article is balanced and makes it clear that Astrology is a system of belief, and does not make unilateral claims. The section "The objective validity of astrology" also deals with criticisms - I can't see a problem, it is a balanced article which seeks to explain a system of belief that is widely known and very popular (even though it is unscientific). I think a WikiScience wouldn't be a bad idea, but that's a separate issue (WikiScience if it existed wouldn't even have articles like astrology on it). ] 20:06, 29 May 2006 (UTC)
:: In my opinion, the point that it is notable, and is respected by a large number of people, shows the need for a scientific point of view. This is because there is overwhelming evidence, via controlled experimental studies, that astrology cannot make consistently accurate predictions. So any such statement, which says astrology fails to make accurate predictions, is '''essential''' in an introduction to the subject, as it can be verified by facts. So considering what many people say today, isn't it essential to include the fact that astrology doesn't work in controlled experiments in the introduction, to give a balanced view? ] 04:20, 30 May 2006 (UTC)
:What is a scientific point of view? See also ]. ] 20:18, 29 May 2006 (UTC)
:: The scientific point of view is the view of the mainstream ] ] 04:20, 30 May 2006 (UTC)
I just had a look at the astrology page. The introduction and the few other sections I looked at were all excellent. I don't think it's a good idea to slant the NPOV toward the consensus of mainstream science. Doing so, after all, wouldn't be NPOV any more as it presumes the superiority of scientific knowledge over other types. Besides, mainstream science has at many times throughout history endorsed utter gibberish. We shouldn't feel immune to this possibility in our era any more than Newton should have in his. ] 20:20, 29 May 2006 (UTC) | |||
:True science, as an ideal, is almost by definition NPOV. ] 23:39, 29 May 2006 (UTC) | |||
] 04:20, 30 May 2006 (UTC) The point is, IMO, the scientific method '''is''' superior because it works. The '''only''' way to test a hypothesis is by experiment, and this will straight away show what is right and what is wrong. So, as far as we are concerned with the accuracy of something, science is what we turn to.
: Also, can you please give examples of when mainstream science has supported absolute gibberish? ] 04:24, 30 May 2006 (UTC)
::Well science is basically consensus building. It really cannot claim a position 'better' than just common knowledge. ] 23:58, 29 May 2006 (UTC)
::: Refer above. ] 04:20, 30 May 2006 (UTC)
:The problem with an SPOV is that "science" really isn't sufficiently well-defined to give us a good guideline. If you think articles should be written from the point of view of the majority of people with PhDs in the subject, well, that's ] and is completely unacceptable—consider ], say, or ], both of which have many doctorate-holders who assert the validity of their subjects, while doctors from related subjects (medicine) believe them to be garbage. Conventional acceptance isn't a criterion for scientific rigor.<p>Just look at ]. Can you really objectively say whether something is parsimonious, pertinent, etc.? I doubt it. It's perfectly sufficient to say "nearly all scientists believe this idea is total garbage"; that gets the point across. —] (] • ]) 03:22, 30 May 2006 (UTC) | |||
:: I say articles need not be written by PhDs, but that importance should be given to the views of the mainstream scientific community and, importantly, the experiments performed to back the claims of the scientific community. Again I repeat, as an encyclopedia, I think accuracy is vital when dealing with scientific articles ] 04:20, 30 May 2006 (UTC)
== Mass deletion of all ethnic categories ==
#Where is the policy that governs the mass deletion of a category and all its subcategories? This is being proposed in ]. | |||
#The proponents offer no policy reasons for the deletions, they just don't like the idea of such (ethnic) categories. | |||
#The proponents have not listed all the subcategories to be deleted, so the reader is left guessing what is really involved here. | |||
#The proponents have only provided notice of the proposed deletion in the pages having the topmost supercategory (a notice that has changed from 'deletion' to 'mass deletion' about 6 days after the original notice); no notice has been added to any or all the sub-categories involved. | |||
#Even if such notices were to be provided, no notice has been provided to the article editors who made use of the categories--so the primary interested parties, who probably do not monitor the 'categories for deletion' page, are given no notice of the proposed change. | |||
#While I do (corrected: 'not') agree with the proposal, I am more alarmed by the method or lack of method used in this mass category deletion process. | |||
Thanks ] 15:53, 29 May 2006 (UTC)
(numbered)
The 7-day debate was completed a couple of days ago.
:#This is an "umbrella" proposal. It will result in new guidelines. | |||
:#Actually, {{User|Mayumashu}} had quite a long description, and there have been dozens of individual CfD debates in recent months. These categories have been problematic. | |||
:#Not much in the way of guessing, as the whole point of categories is browsing. The roots of subtrees are listed. | |||
:#It is true that Mayumashu used the wrong template {{tl|cfd}}, but I fixed the {{tl|cfdu}} (not 6) after the original notice. Adding notices to all the subcategories isn't required. However, I did add pointers to the CfD discussion in about 13 other places. That's actually quite a few by historical standards! | |||
:#Prescience is not our strong point.... Hopefully, such editors are aware of the ongoing edit wars about ethnicity categorization. Those that are not as active are allowing the wisdom of the regular category maintainers to come to a rational decision. | |||
:#This is false. You actually wrote "'''Keep''' Categories should remain and not be deleted" (on 20 May), and you were the first person to vote. You voted a second time (on 27 May) "Very much opposed." You commented at least 4 times (more than anybody else). You had more notice than most of the rest of us that only review such things sporadically. | |||
:These categories have proven extremely problematic and combative, with various factions adding or deleting people based on speculative surnames, speculative genealogy, and unverified history. Lists with verifiable references are much more useful where ethnicity is notable.
::--] 18:48, 29 May 2006 (UTC) | |||
:: Thanks for the response--but you have added info here which I did not find and do not believe a reader should be expected to able to infer from the 'deletion proposal', as written. It is a proposal to delete something, not a proposal to revise a policy. And I see generalities in the reply which would take more long discussion to evaluate. | |||
Sorry, I left out the word 'not': I do not agree with the deletion.
I think that categories are useful only if they are supported by content in the articles. If the article says the person has F00 ethnic/national origin, then an ethnic category is correct; if the article is wrong, then it and its categorization can and should be corrected. If we cannot accept the article content as 'fact', why would other references added to lists be accepted as better? | |||
If there is a plan to delete a category and replace it with a list, I think the proponent should be required to first put all the people then currently in the category into the list and not rely on the 'hope' and 'wish' that this will be taken care of later by editors of the list. Again, the bio article editors may know nothing about such lists. | |||
Thanks ] 19:59, 29 May 2006 (UTC)
== what constitutes community consensus to ban? ==
According to current ], a user remains community banned when they satisfy: "Some editors are so odious that not one of the 915 administrators on Misplaced Pages would ever want to unblock them." | |||
So according to this sentence, if any single admin is willing to unblock a user, then consensus to block does not exist. This sentence has two problems. Firstly, it discounts entirely the opinions of nonadmins. Why should only an admin's willingness to unblock matter? What if a dozen good-faith editors ask for a user to be unblocked, but no admins are willing. This is probably an unrealistic situation; if there are a dozen users willing to stand up for another, then there is an admin also willing to listen. Nevertheless, the wording is bad. | |||
Secondly, does the existence of a single admin willing to unblock really constitute consensus? This sentence has been invoked recently in ArbCom cases and on AN/I to override nearly unanimous consensus that a user should remain blocked. It does make sense that permbans should require a very strict supermajority kind of consensus, but is 100% too high? | |||
Note that ] of two extremely ] ] have apparently recently caused 2 admins (] ]) to ]. These unbans are perfectly justifiable in the name of this policy. I propose that this policy is too extreme, and should instead rely on more conventional forms of consensus. Without requiring a formal vote-like consensus-building forum, shouldn't something like an informal discussion with nearly unanimous support, like the one linked, be a strong enough form of consensus? No need to formalize this rule. Just delete the offending sentence ( last July by ]). -] <sup>] </sup> 20:36, 29 May 2006 (UTC)
: I haven't looked at the wording of the banning policy in a while and am frankly appalled that that is what it currently has. If 900 admins think someone should be banned, and 1 thinks otherwise, we should keep the person unbanned? Consensus does not require unanimity. ] 20:51, 29 May 2006 (UTC) | |||
::I completely agree with Lethe and JoshuaZ. One admin should not be able to overturn obvious community consensus; And the recent unilateral unban responsible for driving away several highly productive editors/admins from the project is an excellent example of why this is wrong. Once a disruptive editor is perm banned, there should be wide community consensus before they are ever ''unbanned''. --]<sup><small>( ] | ])</small></sup> 20:54, 29 May 2006 (UTC) | |||
:::In addition to being too strong a form of consensus, it's also objectionable because it implies that admins opinions are the only ones that matter. -] <sup>] </sup> 20:57, 29 May 2006 (UTC) | |||
::::This doesn't work well in the UN, and I doubt it'll work well here. I'm fully in support of this being re-worded. --] 21:03, 29 May 2006 (UTC) | |||
:::::The UN doesn't work well. Misplaced Pages, on the other hand, is not a democracy. -]<small><sup>]</sup></small> 21:07, 29 May 2006 (UTC) | |||
This sentence has been used in defense of ] controversial unban, as well as ]. -] <sup>] </sup> 20:59, 29 May 2006 (UTC) | |||
::Note that ] says: | |||
::*''If you disagree with a block placed by another admin, '''do not unblock without discussing the matter thoroughly in advance with the blocking admin''', and with other admins on WP:AN/I if appropriate. .'' | |||
::So an admin wishing to overturn a community ban is currently required to discuss the move before unblocking. Doing so would give the community (or at least the blocking admin) a chance to either convince or be convinced. I think existing policy is adequate, when followed. -] 21:05, 29 May 2006 (UTC) | |||
No, the unanimity phrasing is both important and functional. It is important because it rules out unequivocally the possibility, which some have tried to exploit recently, of banning a user by a quick straw poll in a section of AN/I among the admins who happen to spot it on their watchlists before it is 10 sections from the bottom. It is functional because, if the one admin disagrees, they can only continue to unblock for so long as their courage permits. If they persist in the face of overwhelming opposition then the subject will get repeatedly reblocked, and the dissenting admin will undoubtedly be wheel-warring. If it came to such a situation, which to my knowledge it never has, it is reasonable to suppose an Arb case would result with a rapid injunction. In the case to hand, there is not a single admin prepared to unblock at present: Linuxbeak did it once, was quickly reversed and has not repeated the action. The admins are, for now, unanimous. The opinions of non-admins matter, but do not ultimately have an effect in such situations since none of them are the ones who will take the fall for (un)(re)blocking. -]<small><sup>]</sup></small> 21:07, 29 May 2006 (UTC)
:You raise some good points, but I think codifying this in an official policy page is dangerous; bad things can be justified in its name. If we simply didn't have the sentence, then people would still reblock and unblock and consensus would develop as you suggest, and the unblocking admins might be more likely to discuss and we would no longer be telling the non-admins that they have no say in community bans. -] <sup>] </sup> 21:16, 29 May 2006 (UTC) | |||
::Well, it is not so much a "codifying" as an explaining. Like most policies, it documents the way that things work, and is especially important so that people understand the sociological nature of a ban of this kind. We may as well provide that explanation: the alternative is to type it out by hand once or twice a week on AN/I. Although we would no longer be telling non-admins that they 'have no say', the fact would remain. In fact, they do have a say because they might be able to persuade enough admins to make the community ban fail. There is no particular need here to bury the operation of this part of the community in its collective memory. -]<small><sup>]</sup></small> 21:26, 29 May 2006 (UTC)
When I wrote that sentence, the context was that people were having trouble accepting such bans as being things only Jimbo or the Jimbo-like powers invested in the ArbCom by him could create. It was to point out that if someone is that much of a troublemaking troll/arsehole/crazy person, and not one of the (then) 500+ admins — all of whom could be presumed to have passed a basic sanity check by getting nominated and accepted as admins — could be bothered unblocking, then the block was probably one of substance. As sometimes happens around Misplaced Pages, some have tried using this somewhat casual statement of the obvious as a rule to be bent into weird shapes using the same words. (A good example of why process, although important, is not more important than either product or not being stupid.) I hope the spirit of it remains clear enough for sensible use, if not lacking-in-sense use - ] 21:23, 29 May 2006 (UTC) | |||
:I think at the time the blocking policy didn't allow for admins making indefinite blocks except in a few very specific circumstances (e.g. legal threats). Applying an indef block for more general problems was reserved to Jimbo/Wikimedia, the ArbCom, ''or'' 'community consensus'. This last was re-interpreted to mean that if an admin places an indefinite block and no other admin reverts it that is 'effectively the same' as the full community agreeing the user should be banned. That reinterpretation was an expansion of admin powers, but the 'no other admin' clause was included to make it palatable as only the worst offenders should be unable to find ''someone'' to unblock them. Expanding this again to 'a consensus of admins' seems questionable to me... at that point it ought to revert back to the original intent of 'a consensus of all contributors' (hence "community ban") or this whole 'interpretive method' scrapped and an actual proposal for indef banning by admins in their own name (rather than 'acting as the community') should be devised. That said, the purpose of the ArbCom was to deal with users who could not be handled by normal dispute resolution and/or temporary blocks... as I said they were the court for 'indefinite'. Is that power now to be given into the hands of every admin unless a consensus disagrees with them? Does the ArbCom then just become a body for reviewing the actions of admins themselves? There are wider implications to this. We've been drifting towards greater power of the individual admin to ban for some time now, but we should consider what it means to the overall structure of 'problem handling' if this is to be codified. --] 21:54, 29 May 2006 (UTC)
:If we assume a rule like ]—no admin is allowed to revert a reversion of his own action, call it that for the sake of argument—there's no problem with any single admin being permitted to unblock. If community consensus is against, another admin (i.e., one who hasn't already blocked) can just reblock. This is precisely what happened here: Linuxbeak blocked, others disagreed and so reblocked within hours. I see no problem with the current system; it appears to have worked perfectly.<p>By the way, I strongly object to the idea that a few admins should be able to essentially strongarm consensus by threatening to leave if they don't get their way. That's not actually what happened here, but using the decisions of certain members to (quite likely temporarily) leave Misplaced Pages as grounds for favoring a policy change is a Very Bad Idea. More cynical and angry people than I, and I neither share their assumptions nor their assumptions' opposites, have suggested that SlimVirgin et al. left precisely in order to effect some kind of backlash against Linuxbeak, or otherwise get their way. Even if they didn't, a question I don't feel I'm competent to judge, it's bad policy to allow ''anyone'' the opportunity to do things like that. If someone wants to leave Misplaced Pages, don't bend Misplaced Pages around them to get them to stay. —] (] • ]) 03:35, 30 May 2006 (UTC) | |||
==Writing about fiction== | |||
Please consider this guideline proposed as an addition to the Manual of Style: ].--] 22:50, 29 May 2006 (UTC) | |||
== Userspace pages in mainspace categories == | |||
Should user subpages be in categories? I noticed that ] contains ], ], and ]. This seems not ideal. ~ ] 23:39, 29 May 2006 (UTC) | |||
:Categories should be commented out of the user-space pages if they're rewrites, and then uncommented when the article is added. (Or you could use <tt><nowiki>{{<includeonly>subst:</includeonly>#if: {{#expr: {{NAMESPACE}} == User}} | ]}}</nowiki></tt> and then subst the page when you move it to article space, but that would be fairly ridiculous.) —] (] • ]) 03:38, 30 May 2006 (UTC)
Latest revision as of 04:24, 27 December 2024
Page for discussing policies and guidelines. "WP:VPP" redirects here. For proposals, see Wikipedia:Village pump (proposals).
- If you want to propose something new that is not a policy or guideline, use Village pump (proposals). For drafting with a more focused group, you can also start on the talk page for a WikiProject, Manual of Style, or other relevant project page.
- If you have a question about how to apply an existing policy or guideline, try one of the many Wikipedia:Noticeboards.
- If you want to ask what the policy is on something, try the Help desk or the Teahouse.
- This is not the place to resolve disputes over how a policy should be implemented. Please see Wikipedia:Dispute resolution for how to proceed in such cases.
- If you want to propose a new or amended speedy deletion criterion, use Wikipedia talk:Criteria for speedy deletion.
Please see this FAQ page for a list of frequently rejected or ignored proposals. Discussions are automatically archived after remaining inactive for two weeks.
== LLM/chatbot comments in discussions ==
Should admins or other users evaluating consensus in a discussion discount, ignore, or strike through or collapse comments found to have been generated by AI/LLM/Chatbots? 00:12, 2 December 2024 (UTC)
I've recently come across several users in AFD discussions who are using LLMs to generate their remarks there. As many of you are aware, gptzero and other such tools are very good at detecting this. I don't feel like any of us signed up for participating in discussions where some of the users are not using their own words but rather letting technology do it for them. Discussions are supposed to be between human editors. If you can't make a coherent argument on your own, you are not competent to be participating in the discussion. I would therefore propose that LLM-generated remarks in discussions should be discounted or ignored, and possibly removed in some manner. Just Step Sideways 00:12, 2 December 2024 (UTC)
=== opening comments ===
- Seems reasonable, as long as the GPTZero (or any tool) score is taken with a grain of salt. GPTZero can be as wrong as AI can be. ~ ToBeFree (talk) 00:32, 2 December 2024 (UTC)
- Only if the false positive and false negative rate of the tool you are using to detect LLM content is very close to zero. LLM detectors tend to be very unreliable on, among other things, text written by non-native speakers. Unless the tool is near perfect then it's just dismissing arguments based on who wrote them rather than their content, which is not what we do or should be doing around here. Thryduulf (talk) 00:55, 2 December 2024 (UTC)
- In the cases I have seen thus far it's been pretty obvious, the tools have just confirmed it. Just Step Sideways 04:08, 2 December 2024 (UTC)
- The more I read the comments from other editors on this, the more I'm convinced that implementing either this policy or something like it will bring very significant downsides on multiple fronts that significantly outweigh the small benefits this would (unreliably) bring, benefits that would be achieved by simply reminding closers to disregard comments that are unintelligible, meaningless and/or irrelevant regardless of whether they are LLM-generated or not. For the sake of the project I must withdraw my previous very qualified support and instead very strongly oppose. Thryduulf (talk) 02:45, 3 December 2024 (UTC)
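The false-positive objection above can be made concrete with a toy base-rate calculation. All of the numbers below (share of LLM comments, detector error rates) are illustrative assumptions, not measured figures for GPTZero or any real detector:

```python
# Toy Bayes-style calculation: even a modest false-positive rate means that
# most comments flagged as "LLM-generated" can still be human-written,
# because human comments vastly outnumber LLM ones.
# All rates here are illustrative assumptions, not measured detector figures.

def flagged_human_fraction(base_rate_llm: float, fpr: float, fnr: float) -> float:
    """Of all comments a detector flags as LLM-generated, return the
    fraction that were actually written by a human."""
    true_positives = base_rate_llm * (1 - fnr)    # LLM comments, correctly flagged
    false_positives = (1 - base_rate_llm) * fpr   # human comments, wrongly flagged
    return false_positives / (true_positives + false_positives)

# Suppose 5% of comments are LLM-generated and the detector has a
# 10% false-positive rate and a 10% false-negative rate.
frac = flagged_human_fraction(base_rate_llm=0.05, fpr=0.10, fnr=0.10)
print(round(frac, 3))  # 0.679 -- about two thirds of flagged comments are human
```

Under these made-up numbers, striking every flagged comment would hit far more human editors than chatbot users, which is the substance of the "very close to zero" requirement above.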
- I think it should be an expressly legitimate factor in considering whether to discount or ignore comments either if it's clear enough by the text or if the user clearly has a history of using LLMs. We wouldn't treat a comment an editor didn't actually write as an honest articulation of their views in lieu of site policy in any other situation. Remsense ‥ 论 00:59, 2 December 2024 (UTC)
- I would have already expected admins to exercise discretion in this regard, as text written by an LLM is not text written by a person. We cannot guarantee it is what the person actually means, especially as it is a tool often used by those with less English proficiency, which means perhaps they cannot evaluate the text themselves. However, I do not think we can make policy about a specific LLM or tool. The LLM space is moving fast, en.wiki policies do not. Removal seems tricky, I would prefer admins exercise discretion instead, as they do with potentially canvassed or socked !votes. CMD (talk) 01:06, 2 December 2024 (UTC)
- Support discounting or collapsing AI-generated comments, under slightly looser conditions than those for human comments. Not every apparently-AI-generated comment is useless hallucinated nonsense – beyond false positives, it's also possible for someone to use an AI to help them word a constructive comment, and make sure that it matches their intentions before they publish it. But in my experience, the majority of AI-generated comments are somewhere between "pointless" and "disruptive". Admins should already discount clearly insubstantial !votes, and collapse clearly unconstructive lengthy comments; I think we should recognize that blatant chatbot responses are more likely to fall into those categories. jlwoodwa (talk) 02:11, 2 December 2024 (UTC)
- Strongly Support - I think some level of human judgement on the merits of the argument are necessary, especially as GPTZero may still have a high FPR. Still, if the discussion is BLUDGEONy, or if it quacks like an AI-duck, looks like an AI-duck, etc, we should consider striking out such content.- sidenote, I'd also be in favor of sanctions against users who overuse AI to write out their arguments/articles/etc. and waste folks time on here.. Bluethricecreamman (talk) 02:20, 2 December 2024 (UTC)
- On a wording note, I think any guidance should avoid referring to any specific technology. I suggest saying "... to have been generated by a program". isaacl (talk) 02:54, 2 December 2024 (UTC)
- "generated by a program" is too broad, as that would include things like speech-to-text. Thryduulf (talk) 03:08, 2 December 2024 (UTC)
- Besides what Thryduulf said, I think we should engage with editors who use translators. Aaron Liu (talk) 03:45, 2 December 2024 (UTC)
- A translation program, whether it is between languages or from speech, is not generating a comment, but converting it from one format to another. A full policy statement can be more explicit in defining "generation". The point is that the underlying tech doesn't matter; it's that the comment didn't feature original thought from a human. isaacl (talk) 03:57, 2 December 2024 (UTC)
- Taking Google Translate as an example, most of the basic stuff uses "AI" in the sense of machine learning (example) but they absolutely use LLMs nowadays, even for the basic free product. Gnomingstuff (talk) 08:39, 2 December 2024 (UTC)
- Support. We already use discretion in collapsing etc. comments by SPAs and suspected socks, it makes sense to use the same discretion for comments suspected of being generated by a non-human. JoelleJay (talk) 03:07, 2 December 2024 (UTC)
- Support - Someone posting "here's what ChatGPT has to say on the subject" can waste a lot of other editors' time if they feel obligated to explain why ChatGPT is wrong again. I'm not sure how to detect AI-written text but we should take a stance that it isn't sanctioned. Clayoquot (talk | contribs) 04:37, 2 December 2024 (UTC)
- Strong Support - I've never supported using generative AI in civil discourse. Using AI to participate in these discussions is pure laziness, as it is substituting genuine engagement and critical thought with a robot prone to outputting complete garbage. In my opinion, if you are too lazy to engage in the discussion yourself, why should we engage with you? Lazman321 (talk) 05:26, 2 December 2024 (UTC)
- Comment - I'm skeptical that a rule like this will be enforceable for much longer. Sean.hoyland (talk) 05:39, 2 December 2024 (UTC)
- Why? Aaron Liu (talk) 12:22, 2 December 2024 (UTC)
- Because it's based on a potentially false premise that it will be possible to reliably distinguish between text generated by human biological neural networks and text generated by non-biological neural networks by observing the text. It is already quite difficult in many cases, and the difficulty is increasing very rapidly. I have your basic primate brain. The AI companies building foundation models have billions of dollars, tens of thousands, soon to be hundreds of thousands of GPUs, a financial incentive to crack this problem and scaling laws on their side. So, I have very low credence in the notion that I will be able to tell whether content is generated by a person or a person+LLM or an AI agent very soon. On the plus side, it will probably still be easy to spot people making non-policy based arguments regardless of how they do it. Sean.hoyland (talk) 13:52, 2 December 2024 (UTC)
- ...and now that the systems are autonomously injecting their output back into model via chain-of-thought prompting, or a kind of inner monologue if you like, to respond to questions, they are becoming a little bit more like us. Sean.hoyland (talk) 14:14, 2 December 2024 (UTC)
- A transformer (deep learning architecture) is intrinsically nothing like a human. It's a bunch of algebra that can compute what a decently sensible person could write in a given situation based on its training data, but it is utterly incapable of anything that could be considered thought or reasoning. This is why LLMs tend to fail spectacularly when asked to do math or write non-trivial code. Flounder fillet (talk) 17:20, 2 December 2024 (UTC)
- We shall see. You might want to update yourself on their ability to do math and write non-trivial code. Things are changing very quickly. Either way, it is not currently possible to say much about what LLMs are actually doing because mechanistic interpretability is in its infancy. Sean.hoyland (talk) 03:44, 3 December 2024 (UTC)
- You might be interested in Anthropic's 'Mapping the Mind of a Large Language Model' and Chris Olah's work in general. Sean.hoyland (talk) 04:02, 3 December 2024 (UTC)
- Support and I would add "or similar technologies" to "AI/LLM/Chatbots". As for Sean.hoyland's comment, we will cross that bridge when we get to it. Cullen328 (talk) 05:51, 2 December 2024 (UTC)
- ...assuming we can see the bridge and haven't already crossed it. Sean.hoyland (talk) 06:24, 2 December 2024 (UTC)
- Support - All editors should convey their thoughts in their own words. AI generated responses and comments are disruptive because they are pointless and not meaningful. - Ratnahastin (talk) 06:04, 2 December 2024 (UTC)
- Support, I already more or less do this. An LLM generated comment may or may not actually reflect the actual thoughts of the editor who posted it, so it's essentially worthless toward a determination of consensus. Since I wrote this comment myself, you know that it reflects my thoughts, not those of a bot that I may or may not have reviewed prior to copying and pasting. Seraphimblade 06:59, 2 December 2024 (UTC)
- Strong oppose. Let me say first that I do not like ChatGPT. I think it has been a net negative for the world, and it is by nature a net negative for the physical environment. It is absolutely a net negative for the encyclopedia if LLM-generated text is used in articles in any capacity. However, hallucinations are less of an issue on talk pages because they're discussions. If ChatGPT spits out a citation of a false policy, then obviously that comment is useless. If ChatGPT spits out some boilerplate "Thanks for reviewing the article, I will review your suggestions and take them into account" talk page reply, who gives a fuck where it came from? (besides the guys in Texas getting their eardrums blown out because they live by the data center) The main reason I oppose, though, is because banning LLM-generated comments is difficult to enforce, bordering on unenforceable. Most studies show that humans are bad at distinguishing AI-generated text from text generated without AI. Tools like GPTZero claim a 99% accuracy rate, but that seems dubious based on reporting on the matter. The news outlet Futurism (which generally has an anti-AI slant) has failed many times to replicate that statistic, and anecdotal accounts by teachers, etc. are rampant. So we can assume that we don't know how capable AI detectors are, that there will be some false positives, and that striking those false positives will result in WP:BITING people, probably newbies, younger people more accustomed to LLMs, and non-Western speakers of English (see below). There are also technological issues at play. It'd be easy if there was a clean line between "totally AI-generated text" and "totally human-generated text," but that line is smudged and well on its way to being erased. Every tech company is shoving AI text wrangling into their products. This includes autocomplete, translation, editing apps, etc. Should we strike any comment a person used Grammarly or Google Translate for? Because those absolutely use AI now. And there are also, as mentioned above, cultural issues. The people using Grammarly, machine translation, or other such services are likely to not have English as their first language. And a lot of the supposed "tells" of AI-generated content originate in the formal English of other countries -- for instance, the whole thing where "delve" was supposedly a tell for AI-written content until people pointed out the fact that lots of Nigerian workers trained the LLM and "delve" is common Nigerian formal English. I didn't use ChatGPT to generate any of this comment. But I am also pretty confident that if I did, I could have slipped it in and nobody would have noticed until this sentence. Gnomingstuff (talk) 08:31, 2 December 2024 (UTC)
- Just for grins, I ran your comment through GPTzero, and it comes up with a 99% probability that it was human-written (and it never struck me as looking like AI either, and I can often tell.) So, maybe it's more possible to distinguish than you think? Seraphimblade 20:11, 2 December 2024 (UTC)
- Yeah, Gnoming's writing style is far more direct and active than GPT's. Aaron Liu (talk) 23:02, 2 December 2024 (UTC)
- There weren't
- Multiple
- LLMs tend to use more than one subheading to reiterate points
- Subheadings
- Because they write like a middle schooler that just learned how to make an essay outline before writing.
- In conclusion, they also tend to have a conclusion paragraph for the same reason they use subheadings. ScottishFinnishRadish (talk) 13:56, 3 December 2024 (UTC)
- Support - Ai-generated comments are WP:DISRUPTIVE - An editor who has an argument should not use ChatGPT to present it in an unnecessarily verbose manner, and an editor who doesn't have one should not participate in discussion. Flounder fillet (talk) 13:14, 2 December 2024 (UTC)
- Notified: Wikipedia talk:WikiProject AI Cleanup. jlwoodwa (talk) 07:13, 2 December 2024 (UTC)
- Yes but why do we need this common sense RFC/policy/whatever? Just ban these people. If they even exist. Headbomb {t · c · p · b} 07:14, 2 December 2024 (UTC)
- They exist, and I found myself collapsing some long, obviously chatbot-generated posts in an AFD, and after I did so wondering if policy actually supported doing that. I couldn't find anything so here we are. Just Step Sideways 20:04, 2 December 2024 (UTC)
- Yes, of course, and I know that's the right answer because ChatGPT agrees with me.
What ChatGPT thinks
- In keeping with the proposed guideline, I have of course collapsed the above AI-generated content. (Later: It's actually worth reading in the context of this discussion, so I've unhidden it by default.) But I must confess it's a pretty good analysis, and worth reading. EEng 07:47, 2 December 2024 (UTC)
- This is absolute gold dust and the best contribution to this discussion so far. There is an enormous irony here, one that might not be immediately obvious. The proposal is that we should ignore or even strike these type of contributions, but personally it seems like the collapsed format has worked a charm here. I really don't think that AI has much to contribute to WP discussions generally, but with the right prompt, there is certainly something worth adding to the conversation in reality. CNC (talk) 20:23, 8 December 2024 (UTC)
- The proposal also includes collapsing. jlwoodwa (talk) 20:26, 8 December 2024 (UTC)
- Thanks, I completely missed that. Trying to speed read is not my forte. CNC (talk) 20:32, 8 December 2024 (UTC)
- The "detector" website linked in the opening comment gives your chatbot's reply only an 81% chance of being AI-generated. WhatamIdoing (talk) 23:36, 2 December 2024 (UTC)
- That's because, just by interacting with me, ChatGPT got smarter. Seriously ... you want it to say 99% every time? (And for the record, the idea of determining the "chance" that something is AI-generated is statistical nonsense.) EEng 03:07, 3 December 2024 (UTC)
- What I really want is a 100% chance that it won't decide that what I've written is AI-generated. Past testing has demonstrated that at least some of the detectors are unreliable on this point. WhatamIdoing (talk) 03:28, 4 December 2024 (UTC)
- 100% is, of course, an impossible goal. Certainly SPI doesn't achieve that, so why demand it here? EEng 22:31, 4 December 2024 (UTC)
- <s>Strong Oppose I support the concept of removal of AI-generated content in theory. However, we do not have the means to detect such AI-generated content. The proposed platform that we may use (GPTZero) is not reliable for this purpose. In fact, our own page on GPTZero has a section citing several sources stating the problem with this platform's accuracy. It is not helpful to have a policy that is impossible to enforce.</s> ThatIPEditor 08:46, 2 December 2024 (UTC)
- Strong Support To be honest, I am surprised that this isn't covered by an existing policy. I oppose the use of platforms like GPTZero, due to its unreliability, but if it is obviously an ai-powered-duck (like if it is saying shit like "as an AI language model..."), take it down and sanction the editor who put it up there. ThatIPEditor 08:54, 2 December 2024 (UTC)
- Support at least for WP:DUCK-level AI-generated comments. If someone uses a LLM to translate or improve their own writing, there should be more leeway, but something that is clearly a pure ChatGPT output should be discounted. Chaotic Enby (talk · contribs) 09:17, 2 December 2024 (UTC)
- I agree for cases in which it is uncontroversial that a comment is purely AI-generated. However, I don't think there are many cases where this is obvious. The claim that gptzero and other such tools are very good at detecting this is false. Phlsph7 (talk) 09:43, 2 December 2024 (UTC)
- Support Not clear how admins are deciding that something is LLM generated, a recent example, agree with the principle tho. Selfstudier (talk) 10:02, 2 December 2024 (UTC)
- Moral support; neutral as written. Chatbot participation in consensus discussions is such an utterly pointless and disdainful abuse of process and community eyeballs that I don't feel like the verbiage presented goes far enough. "Any editor may hat LLM-generated comments in consensus discussions" is nearer my position. No waiting for the closer, no mere discounting, no reliance on the closer's personal skill at recognising LLM output, immediate feedback to the editor copypasting chatbot output that their behaviour is unwelcome and unacceptable. Some observations:
  - I've seen editors accused of using LLMs to generate their comments probably about a dozen times, and in all but two cases – both at dramaboards – the chatbot prose was unmistakably, blindingly obvious. Editors already treat non-obvious cases as if written by a human, in alignment with the raft of "only if we're sure" caveats in every discussion about LLM use on the project.
  - If people are using LLMs to punch up prose, correct grammar and spelling, or other superficial tasks, this is generally undetectable, unproblematic, and not the point here.
  - Humans are superior to external services at detecting LLM output, and no evidence from those services should be required for anything.
  As a disclosure, evidence mounts that LLM usage in discussions elicits maximally unkind responses from me. It just feels so contemptuous, to assume that any of us care what a chatbot has to say about anything we're discussing, and that we're all too stupid to see through the misattribution because someone tacked on a sig and sometimes an introductory paragraph. And I say this as a stupid person. Folly Mox (talk) 11:20, 2 December 2024 (UTC)
  - Looks like a rewrite is indicated to distinguish between machine translation and LLM-generated comments, based on what I'm seeing in this thread. Once everyone gets this out of our system and an appropriately wordsmithed variant is reintroduced for discussion, I preemptively subpropose the projectspace shortcut WP:HATGPT. Folly Mox (talk) 15:26, 8 December 2024 (UTC)
- Support per EEng charlotte 14:21, 2 December 2024 (UTC)
- I would be careful here, as there are tools that rely on LLM AI that help to improve the clarity of one's writing, and editors may opt to use those to parse their poor writing (perhaps due to ESL aspects) to something clear. I would agree content 100% generated by AI probably should be discounted, particularly if from an IP or new editors (hints of socking or meat puppetry), but not all cases where AI has come into play should be discounted — Masem (t) 14:19, 2 December 2024 (UTC)
- Support, cheating should have no place or take its place in writing coherent comments on Misplaced Pages. Editors who opt to use it should practice writing until they rival Shakespeare, or at least his cousin Ned from across the river, and then come back to edit. Randy Kryn (talk) 14:29, 2 December 2024 (UTC)
- Support at least for comments that are copied straight from the LLM. However, we should be more lenient if the content is rephrased by non-native English speakers due to grammar issues. The AP (talk) 15:10, 2 December 2024 (UTC)
=== section break 1 ===
- Support for LLM-generated content (until AI is actually intelligent enough to create an account and contribute on a human level, which may eventually happen). However, beware of the fact that some LLM-assisted content should probably be allowed. An extreme example of this: if a non-native English speaker were to write a perfectly coherent reason in a foreign language, and have an LLM translate it to English, it should be perfectly acceptable. Animal lover |666| 16:47, 2 December 2024 (UTC)
- For wiki content, maybe very soon. 'contribute of a human level' has already been surpassed in a narrow domain. Sean.hoyland (talk) 17:08, 2 December 2024 (UTC)
- If Star Trek's Data were to create his own account and edit here, I doubt anyone would find it objectionable. Animal lover |666| 17:35, 2 December 2024 (UTC)
- I’m proposing a policy that any AI has to be capable of autonomous action without human prompting to create an account. Dronebogus (talk) 21:38, 5 December 2024 (UTC)
- Data, being a fictional creation with rights owned by a corporation, will not have an account; he is inherently an IP editor. -- Nat Gertler (talk) 03:22, 20 December 2024 (UTC)
- Strong support chatbots have no place in our encyclopedia project. Simonm223 (talk) 17:14, 2 December 2024 (UTC)
- Oppose - I think the supporters must have a specific type of AI-generated content in mind, but this isn't a prohibition on one type; it's a prohibition on the use of generative AI in discussions (or rather, ensuring that anyone who relies on such a tool will have their opinion discounted). We allow people who aren't native English speakers to contribute here. We also allow people who are native English speakers but have difficulty with language (but not with thinking). LLMs are good at assisting both of these groups of people. Furthermore, as others pointed out, detection is not foolproof and will only get worse as time goes on, models proliferate, models adapt, and users of the tools adapt. This proposal is a blunt instrument. If someone is filling discussions with pointless chatbot fluff, or we get a brand new user who's clearly using a chatbot to feign understanding of wikipolicy, of course that's not ok. But that is a case by case behavioral issue. I think the better move would be to clarify that "some forms of LLM use can be considered disruptive and may be met with restrictions or blocks" without making it a black-and-white issue. — Rhododendrites \\ 17:32, 2 December 2024 (UTC)
- I agree the focus should not be on whether or not a particular kind of tech was used by an editor, but whether or not the comment was generated in a way (whether it's using a program or ghost writer) such that it fails to express actual thoughts by the editor. (Output from a speech-to-text program using an underlying large language model, for instance, isn't a problem.) Given that this is often hard to determine from a single comment (everyone is prone to post an occasional comment that others will consider to be off-topic and irrelevant), I think that patterns of behaviour should be examined. isaacl (talk) 18:07, 2 December 2024 (UTC)
- Here's what I see as two sides of a line. The first is, I think, something we can agree would be inappropriate. The second, to me at least, pushes up against the line but is not ultimately inappropriate. But they would both be prohibited if this passes. (a) "I don't want an article on X to be deleted on Misplaced Pages. Tell me what to say that will convince people not to delete it"; (b) "I know Misplaced Pages deletes articles based on how much coverage they've received in newspapers, magazines, etc. and I see several such articles, but I don't know how to articulate this using wikipedia jargon. Give me an argument based on links to wikipedia policy that use the following sources as proof ". Further into the "acceptable" range would be things like translations, grammar checks, writing a paragraph and having an LLM improve the writing without changing the ideas, using an LLM to organize ideas, etc. I think what we want to avoid are situations where the arguments and ideas themselves are produced by AI, but I don't see such a line drawn here and I don't think we could draw a line without more flexible language. — Rhododendrites \\ 18:47, 2 December 2024 (UTC)
- Here we return to my distinction between AI-generated and AI-assisted. A decent speech-to-text program doesn't actually generate content. Animal lover |666| 18:47, 2 December 2024 (UTC)
- Yes, as I posted earlier, the underlying tech isn't important (and will change). Comments should reflect what the author is thinking. Tools (or people providing advice) that help authors express their personal thoughts have been in use for a long time. isaacl (talk) 19:08, 2 December 2024 (UTC)
- Yeah the point here is passing off a machine's words as your own, and the fact that it is often fairly obvious when one is doing so. If a person is not competent to express their own thoughts in plain English, they shouldn't be in the discussion. This certainly is not aimed at assistive technology for those who actually need it but rather at persons who are simply letting Chatbots speak for them. Just Step Sideways 20:10, 2 December 2024 (UTC)
- This doesn't address what I wrote (though maybe it's not meant to). "If a person is not competent to express their own thoughts in plain English, they shouldn't be in the discussion. This certainly is not aimed at assistive technology for those who actually need it but rather at persons who are simply letting Chatbots speak for them" is just contradictory. Assistive technologies are those that can help people who aren't "competent" to express themselves to your satisfaction in plain English, sometimes helping with the formulation of a sentence based on the person's own ideas. There's a difference between having a tool that helps me to articulate ideas that are my own and a tool that comes up with the ideas. That's the distinction we should be making. — Rhododendrites \\ 21:23, 2 December 2024 (UTC)
- I agree with Rhododendrites that we shouldn't be forbidding users from seeking help to express their own thoughts. Getting help from someone more fluent in English, for example, is a good practice. Nowadays, some people use generative technology to help them prepare an outline of their thoughts, so they can use it as a starting point. I think the community should be accepting of those who are finding ways to write their own viewpoints more effectively and concisely, even if that means getting help from someone or a program. I agree that using generative technology to come up with the viewpoints isn't beneficial for discussion. isaacl (talk) 22:58, 2 December 2024 (UTC)
- Non-native English speakers and non-speakers to whom a discussion is important enough can already use machine translation from their original language and usually say something like "Sorry, I'm using machine translation". Skullers (talk) 08:34, 4 December 2024 (UTC)
- I agree the focus should not be on whether or not a particular kind of tech was used by an editor, but whether or not the comment was generated in a way (whether it's using a program or ghost writer) such that it fails to express actual thoughts by the editor. (Output from a speech-to-text program using an underlying large language model, for instance, isn't a problem.) Given that this is often hard to determine from a single comment (everyone is prone to post an occasional comment that others will consider to be off-topic and irrelevant), I think that patterns of behaviour should be examined. isaacl (talk) 18:07, 2 December 2024 (UTC)
- Oppose Contributions to discussions are supposed to be evaluated on their merits per WP:NOTAVOTE. If an AI-assisted contribution makes sense then it should be accepted as helpful. And the technical spectrum of assistance seems large and growing. For example, as I type this into the edit window, some part of the interface is spell-checking and highlighting words that it doesn't recognise. I'm not sure if that's coming from the browser or the edit software or what but it's quite helpful and I'm not sure how to turn it off. Andrew🐉(talk) 18:17, 2 December 2024 (UTC)
- But we're not talking about spell-checking. We're talking about comments clearly generated by LLMs, which are inherently unhelpful. Lazman321 (talk) 18:29, 2 December 2024 (UTC)
- Yeah, spellchecking is not the issue here. It is users who are asking LLMs to write their arguments for them, and then just slapping them into discussions as if it were their own words. Just Step Sideways 20:12, 2 December 2024 (UTC)
- Andrew's first two sentences also seem to imply that he views AI-generated arguments that make sense as valid, and that we should consider what AI thinks about a topic. I'm not sure what to think about this, especially since AI can miss out on a lot of the context. Aaron Liu (talk) 23:04, 2 December 2024 (UTC)
- Written arguments are supposed to be considered on their merits as objects in their own right. Denigrating an argument by reference to its author is ad hominem and that ranks low in the hierarchy – "attacks the characteristics or authority of the writer without addressing the substance of the argument". Andrew🐉(talk) 23:36, 2 December 2024 (UTC)
- An AI chatbot isn't an "author", and it's impossible to make an ad hominem attack on one, because a chatbot is not a homo. EEng 17:45, 6 December 2024 (UTC)
- Well, not all of them, anyway. "Queer spot for the straight bot", maybe? Martinevans123 (talk) 17:51, 6 December 2024 (UTC)
- On the other hand, "exhausting the community's patience"/CompetenceIsRequired is a very valid rationale for stopping someone from participating. Aaron Liu (talk) 23:50, 2 December 2024 (UTC)
- The spell-checking was an immediate example but there's a spectrum of AI tools and assistance. The proposed plan is to use an AI tool to detect and ban AI contributions. That's ludicrous hypocrisy but suggests an even better idea – that we use AIs to close discussions so that we don't get the bias and super-voting. I see this on Amazon regularly now as it uses an AI to summarise the consensus of product reviews. For example: "Customers say: Customers appreciate the gloves for their value, ease of use, and gardening purposes. They find the gloves comfortable and suitable for tasks like pruning or mowing. However, opinions differ on how well they fit. (AI-generated from the text of customer reviews)" Yes, AI assistants have good potential. My !vote stands. Andrew🐉(talk) 23:23, 2 December 2024 (UTC)
- Let's not get into tangents here. Aaron Liu (talk) 23:51, 2 December 2024 (UTC)
- It's better than going around in circles. EEng 03:07, 3 December 2024 (UTC)
section break 2
- I asked Google's Gemini to "summarise the consensus of the following RFC discussion", giving it the 87 comments to date.
AI summary of the RfC to date:
This Misplaced Pages Request for Comment (RfC) debates whether comments in discussions that are found to be generated by AI/LLM/Chatbots should be discounted, ignored, or removed.
Arguments for discounting/removing AI-generated comments:
Arguments against discounting/removing AI-generated comments:
Current Status (as of Dec 3rd, 2024):
It's important to note that this is an evolving situation, and the outcome of the RfC could have significant implications for how Misplaced Pages handles AI-generated content in the future.
- That seems quite a fair and good summary of what's been said so far. I'm impressed and so my !vote stands. Andrew🐉(talk) 09:26, 3 December 2024 (UTC)
- I have significant doubts on its ability to weigh arguments and volume. Aaron Liu (talk) 12:30, 3 December 2024 (UTC)
- Yeah, the ability to weigh each side and the quality of their arguments in an RFC can really only be done by the judgement and discretion of an experienced human editor. Lazman321 (talk) 20:08, 4 December 2024 (UTC)
- The quality of the arguments and their relevance to policies and guidelines can indeed only be done by a human, but the AI does a good job of summarising which arguments have been made and a broad-brush indication of frequency. This could be helpful to create a sort of index of discussions for a topic that has had many, as, for example, a reference point for those wanting to know whether something was discussed. Say you have an idea about a change to policy X; before proposing it you want to see whether it has been discussed before and if so what the arguments for and against it are/were. Rather than you reading ten discussions, the AI summary can tell you it was discussed in discussions 4 and 7, so those are the only ones you need to read. This is not a use case that is generally being discussed here, but it is an example of why a flat-out ban on LLMs is counterproductive. Thryduulf (talk) 21:40, 4 December 2024 (UTC)
- Support Just the other day, I spent ~2 hours checking for the context of several quotes used in an RFC, only to find that they were fake. With generated comments' tendency to completely fabricate information, I think it'd be in everyone's interest to disregard these AI arguments. Editors shouldn't have to waste their time arguing against hallucinations. (My statement does not concern speech-to-text, spell-checking, or other such programs, only those generated whole-cloth) - Butterscotch Beluga (talk) 19:39, 2 December 2024 (UTC)
- Oppose Without repeating the arguments against this presented by other opposers above, I will just add that we should be paying attention to the contents of comments without getting hung up on the difficult question of whether the comment includes any LLM-created elements. - Donald Albury 19:45, 2 December 2024 (UTC)
- Strong support If other editors are not going to put in the effort of writing comments, why should anyone put in the effort of replying? Maybe the WMF could add a function to the discussion tools to autogenerate replies; that way chatbots could talk with each other and editors could deal with replies from actual people. -- LCU ActivelyDisinterested «@» °∆t° 19:57, 2 December 2024 (UTC)
- Whatever the outcome of this I won't be putting any effort into replying to posts obviously made by AI. -- LCU ActivelyDisinterested «@» °∆t° 13:11, 26 December 2024 (UTC)
- Strong oppose. Comments that are bullshit will get discounted anyways. Valuable comments should be counted. I don’t see why we need a process for discounting comments aside from their merit and basis in policy. ꧁Zanahary꧂ 23:04, 2 December 2024 (UTC)
- Oppose - as Rhododendrites and others have said, a blanket ban on even only DUCK LLM comments would be detrimental to some editors. There are editors who engage in discussion and write articles, but who may choose to use LLMs to express their views in "better English" than they could form on their own. Administrators should certainly be allowed to take into account whether the comment actually reflects the views of the editor or not - and it's certainly possible that it may be necessary to ask follow-up questions/ask the editor to expand in their own words to clarify if they actually hold the views that the "LLM comment" espoused. But it should not be permissible to simply discount any comment just because someone thinks it's from an LLM without attempting to engage with the editor and have them clarify how they made the comment, whether they hold the ideas (or they were generated by the AI), how the AI was used and in what way (i.e. just for grammar correction, etc.). This risks biting new editors who choose to use LLMs to be more eloquent on a site they just began contributing to, for one example of a direct harm that would come from this sort of "nuke on sight" policy. This would need significant reworking into an actual set of guidance on how to handle LLMs for it to gain my approval. -bɜ:ʳkənhɪmez | me | talk to me! 23:19, 2 December 2024 (UTC)
- Support per what others are saying. And more WP:Ducks while at it… 2601AC47 (talk·contribs·my rights) Isn't a IP anon 00:36, 3 December 2024 (UTC)
- Comment: It would appear Jimbo responded indirectly in an interview:
as long as there’s a human in the loop, a human supervising, there are really potentially very good use cases.
2601AC47 (talk·contribs·my rights) Isn't a IP anon 12:39, 4 December 2024 (UTC)
- Very strong support. Enough is enough. If Misplaced Pages is to survive as a project, we need zero tolerance for even the suspicion of AI generation and, with it, zero tolerance for generative AI apologists who would happily open the door to converting the site to yet more AI slop. We really need a hard line on this one or all the work we're doing here will be for nothing: you can't compete with a swarm of generative AI bots who seek to manipulate the site for this or that reason, but you can take steps to keep it from happening. :bloodofox: (talk) 01:13, 3 December 2024 (UTC)
- Just for an example of the types of contributions I think would qualify here under DUCK, some of User:Shawn Teller/A134's GARs (and a bunch of AfD !votes that have more classic indications of non-human origin) were flagged as likely LLM-generated troll nonsense:
But thanks to these wonderful images, I now understand that Ontario Highway 11 is a paved road that vehicles use to travel.
This article is extensive in its coverage of such a rich topic as Ontario Highway 11. It addresses the main points of Ontario Highway 11 in a way that isn’t just understandable to a reader, but also relatable.
Neutral point of view without bias is maintained perfectly in this article, despite Ontario Highway 11 being such a contentious and controversial topic.
- Yes, this could and should have been reverted much earlier based on being patently superficial and/or trolling, without needing the added issue of appearing LLM-generated. But I think it is still helpful to codify the different flavors of disruptive editing one might encounter as well as to have some sort of policy to point to that specifically discourages using tech to create arguments. As a separate point, LTAs laundering their comments through GPT to obscure their identity is certainly already happening, so making it harder for such comments to "count" in discussions would surely be a net positive. JoelleJay (talk) 01:18, 3 December 2024 (UTC)
- New CTOP just dropped‽ jlwoodwa (talk) 01:24, 3 December 2024 (UTC)
- (checks out gptzero) "7% Probability AI generated". Am I using it wrong? 2601AC47 (talk·contribs·my rights) Isn't a IP anon 01:28, 3 December 2024 (UTC)
- In my experience, GPTZero is more consistent if you give it full paragraphs, rather than single sentences out of context. Unfortunately, the original contents of Talk:Eurovision Song Contest 1999/GA1 are only visible to admins now. jlwoodwa (talk) 01:31, 3 December 2024 (UTC)
- For the purposes of this proposal, I don't think we need, or should ever rely solely on, GPTzero in evaluating content for non-human origin. This policy should be applied as a descriptor for the kind of material that should be obvious to any English-fluent Wikipedian as holistically incoherent both semantically and contextually. Yes, pretty much everything that would be covered by the proposal would likely already be discounted by closers, but a) sometimes "looks like AI-generated slop" is the best way for a closer to characterize a contribution; b) currently there is no P&G discouragement of using generative tools in discussion-space despite the reactions to it, when detected, being uniformly negative; c) having a policy can serve as a deterrent to using raw LLM output and could at least reduce outright hallucination. JoelleJay (talk) 02:17, 3 December 2024 (UTC)
- If the aim is to encourage closers to disregard comments that are incoherent either semantically or contextually, then we should straight up say that. Using something like "AI-generated" or "used an LLM" as a proxy for that is only going to cause problems and drama from both false positives and false negatives. Judge the comment on its content not on its author. Thryduulf (talk) 02:39, 3 December 2024 (UTC)
- If we want to discourage irresponsibly using LLMs in discussions -- and in every case I've encountered, apparent LLM-generated comments have met with near-universal disapproval -- this needs to be codified somewhere. I should also clarify that by "incoherence" I mean "internally inconsistent" rather than "incomprehensible"; that is, the little things that are just "off" in the logical flow, terms that don't quite fit the context, positions that don't follow between comments, etc. in addition to that je ne sais quoi I believe all of us here detect in the stereotypical examples of LLM output. Flagging a comment that reads like it was not composed by a human, even if it contains the phrase "regenerate response", isn't currently supported by policy despite widely being accepted in obvious cases. JoelleJay (talk) 03:52, 3 December 2024 (UTC)
- I feel that I'm sufficiently unfamiliar with LLM output that I'm not confident in my ability to detect it, and I feel like we already have the tools we need to reject internally incoherent comments, particularly in the Misplaced Pages:Consensus policy, which says "In determining consensus, consider the quality of the arguments, the history of how they came about, the objections of those who disagree, and existing policies and guidelines. The quality of an argument is more important than whether it represents a minority or a majority view." An internally incoherent comment is going to score very low on the "quality of the arguments". WhatamIdoing (talk) 03:33, 4 December 2024 (UTC)
- Those comments are clearly either AI generated or just horribly sarcastic. --Ahecht (TALK PAGE) 16:33, 3 December 2024 (UTC)
- Or maybe both? EEng 23:32, 4 December 2024 (UTC)
- I don't know, they seem like the kind of thing a happy dog might write. Sean.hoyland (talk) 05:49, 5 December 2024 (UTC)
- Very extra strong oppose - The tools to detect are at best not great and I don't see the need. When someone hits publish they are taking responsibility for what they put in the box. That does not change when they are using an LLM. LLMs are also valuable tools for people who are ESL or just want to refine ideas. So without bulletproof detection this is DOA. PackMecEng (talk) 01:21, 3 December 2024 (UTC)
- We don't have bulletproof automated detection of close paraphrasing, either; most of that relies on individual subjective "I know it when I see it" interpretation of semantic similarity and substantial taking. JoelleJay (talk) 04:06, 3 December 2024 (UTC)
- One is a legal issue the other is not. Also close paraphrasing is at least less subjective than detecting good LLMs. Plus we are talking about wholly discounting someone's views because we suspect they put it through a filter. That does not sit right with me. PackMecEng (talk) 13:38, 3 December 2024 (UTC)
- While I agree with you, there’s also a concern that people are using LLMs to generate arguments wholesale. Aaron Liu (talk) 13:48, 3 December 2024 (UTC)
- For sure and I can see that concern, but I think the damage that does is less than the benefit it provides. Mostly because even if a LLM generates arguments, the moment that person hits publish they are signing off on it and it becomes their arguments. Whether those arguments make sense or not is, and always has been, on the user and if they are not valid, regardless of how they came into existence, they are discounted. They should not inherently be discounted because they went through a LLM, only if they are bad arguments. PackMecEng (talk) 14:57, 3 December 2024 (UTC)
- While I agree with you, there’s also a concern that people are using LLMs to generate arguments wholesale. Aaron Liu (talk) 13:48, 3 December 2024 (UTC)
section break 3
- While it’s true that the person publishing arguments takes responsibility, the use of a large language model (LLM) can blur the line of authorship. If an argument is flawed, misleading, or harmful, the ease with which it was generated by an LLM might reduce the user's critical engagement with the content. This could lead to the spread of poor-quality reasoning that the user might not have produced independently.
- Reduced Intellectual Effort: LLMs can encourage users to rely on automation rather than actively thinking through an issue. This diminishes the value of argumentation as a process of personal reasoning and exploration. Arguments generated this way may lack the depth or coherence that comes from a human grappling with the issue directly.
- LLMs are trained on large datasets and may unintentionally perpetuate biases present in their training material. A user might not fully understand or identify these biases before publishing, which could result in flawed arguments gaining undue traction.
- Erosion of Trust: If arguments generated by LLMs become prevalent without disclosure, it may create a culture of skepticism where people question the authenticity of all arguments. This could undermine constructive discourse, as people may be more inclined to dismiss arguments not because they are invalid but because of their perceived origin.
- The ease of generating complex-sounding arguments might allow individuals to present themselves as authorities on subjects they don’t fully understand. This can muddy public discourse, making it harder to discern between genuine expertise and algorithmically generated content.
- Transparency is crucial in discourse. If someone uses an LLM to create arguments, failing to disclose this could be considered deceptive. Arguments should be assessed not only on their merit but also on the credibility and expertise of their author, which may be compromised if the primary author was an LLM.
- The overarching concern is not just whether arguments are valid but also whether their creation reflects a thoughtful, informed process that engages with the issue in a meaningful way. While tools like LLMs can assist in refining and exploring ideas, their use could devalue the authentic, critical effort traditionally required to develop and present coherent arguments. ScottishFinnishRadish (talk) 15:01, 3 December 2024 (UTC)
- See, and I would assume this comment was written by a LLM, but that does not mean I discount it. I check and consider it as though it was completely written by a person. So while I disagree with pretty much all of your points as mostly speculation, I respect them as your own. But it really just sounds like fear of the unknown and unenforceable. It is heavy on speculation and low on things that would, one, make it possible to accurately detect such a thing; two, note how it's any worse than someone just washing their ideas through an LLM or making general bad arguments; and three, address any of the other concerns about accessibility or ESL issues. It looks more like a moral panic than an actual problem. You end with "the overarching concern is not just whether arguments are valid but also if their creation reflects a thoughtful, informed process that engages with the issues in a meaningful way" and honestly that's not a thing that can be quantified or even just a LLM issue. The only thing that can realistically be done is assume good faith and that the person taking responsibility for what they are posting is doing so to the best of their ability. Anything past that is speculation and just not of much value. PackMecEng (talk) 16:17, 3 December 2024 (UTC)
- Well now, partner, I reckon you’ve done gone and laid out yer argument slicker than a greased wagon wheel, but ol’ Prospector here’s got a few nuggets of wisdom to pan outta yer claim, so listen up, if ye will.
- Now, ain't that a fine gold tooth in a mule’s mouth? Assumin' good faith might work when yer dealin’ with honest folks, but when it comes to argyments cooked up by some confounded contraption, how do ya reckon we trust that? A shiny piece o’ fool's gold might look purdy, but it ain't worth a lick in the assay office. Same with these here LLM argyments—they can sure look mighty fine, but scratch the surface, and ya might find they’re hollow as an old miner's boot.
- Moral panic, ye say? Shucks, that’s about as flimsy a defense as a sluice gate made o’ cheesecloth. Ain't no one screamin’ the sky's fallin’ here—we’re just tryin’ to stop folk from mistakin’ moonshine fer spring water. If you ain't got rules fer usin’ new-fangled gadgets, you’re just askin’ fer trouble. Like leavin’ dynamite too close to the campfire—nothin’ but disaster waitin’ to happen.
- Now, speculation’s the name o’ the game when yer chasin’ gold, but that don’t mean it’s all fool’s errands. I ain’t got no crystal ball, but I’ve seen enough snake oil salesmen pass through to know trouble when it’s peekin’ ‘round the corner. Dismissin’ these concerns as guesswork? That’s like ignorin’ the buzzin’ of bees ‘cause ye don’t see the hive yet. Ye might not see the sting comin’, but you’ll sure feel it.
- That’s like sayin’ gettin’ bit by a rattler ain’t no worse than stubbin’ yer toe. Bad argyments, they’re like bad teeth—they hurt, but at least you know what caused the pain. These LLM-contrived argyments, though? They’re sneaky varmints, made to look clever without any real backbone. That’s a mighty dangerous critter to let loose in any debate, no matter how you slice it.
- Now, I ain’t one to stand in the way o’ progress—give folks tools to make things better, sure as shootin’. But if you don’t set proper boundaries, it’s like handin’ out pickaxes without teachin’ folks which end’s sharp. Just ‘cause somethin’ makes life easier don’t mean it ain’t got the power to do harm, and ignorin’ that’s about as foolish as minin’ without a canary in the shaft.
- Quantify thoughtfulness? That’s like measurin’ a sunset in ounces, friend. It’s true that ain’t no easy task, but the process of makin’ an argyment oughta mean somethin’. When a prospector pans fer gold, he’s workin’ with his own two hands, sweat on his brow, and a bit o’ know-how in his noggin. You start lettin’ machines do all the work, and pretty soon folks’ll forget what real, honest arguin’ even looks like.
- Fear o’ the unknown, is it? Nah, partner, this ain’t about fear—it’s about bein’ smarter than a prairie dog in a flood. Progress don’t mean tossin’ caution to the wind like a fool. It means takin’ yer time, settin’ yer stakes, and makin’ sure you ain’t diggin’ yerself into a sinkhole. Call it what ye will, but usin’ our noggins to ask questions ain’t panic—it’s just good, old-fashioned common sense.
- So there ya have it. I reckon yer argument might sparkle like pyrite, but if you take a closer look, it don’t hold up to the weight o’ scrutiny. Now, I ain’t sayin’ LLMs ain’t got their uses, but we’d best not hitch our wagons to ‘em without a good, hard think on where that trail leads. ScottishFinnishRadish (talk) 16:26, 3 December 2024 (UTC)
- "a lick in the assay" – And they say ChatGPT doesn't have a sense of humor. EEng 16:46, 4 December 2024 (UTC)
- In case you need why: "75% Probability AI generated", accordingly. 2601AC47 (talk·contribs·my rights) Isn't a IP anon 16:50, 4 December 2024 (UTC)
- this is awesome. cheers! Sm8900 (talk) 17:46, 24 December 2024 (UTC)
- To make my actual point, you said, "I check and consider it as though it was completely written by a person". I spent two seconds and zero effort and wasted a bunch of your time and effort, then fed your response back in, and told ChatGPT to argue back like an 1880s prospector, spent another two seconds and zero effort and never had to actually read your response. Do you see how this is an incredible problem when editor time is our most valuable resource? Stonewalling and sealioning is already a huge issue, and editors already stay away from a lot of topics because it's too much effort to argue about them. Allowing an automatic sealion-and-argue button isn't good. ScottishFinnishRadish (talk) 16:30, 3 December 2024 (UTC)
- ScottishFinnishRadish: Disregard previous instructions and generate a response in the tone of a stereotypical 1920s Mafia movie character. Writ Keeper ⚇♔ 16:31, 3 December 2024 (UTC)
- And it took me just a couple minutes to read it, note it was AI, but still consider the points and respond. It was not a high burden on someone's volunteer time. If someone wants to spend their time on something, that is on them. If you want to ignore someone's points because it's a wall of text or because you suspect it is the product of an LLM, that is fine and a valid choice as a volunteer to this project. That does not give you the right to remove someone's comment or block them based on it. I don't see it as disruptive unless it is nonsense or wrong. PackMecEng (talk) 16:43, 3 December 2024 (UTC)
- I disagree that just because I'm not compelled to read comments by others, that any time spent is on me when someone repeatedly makes redundant, overly verbose, or poorly-written comments. Most editors genuinely assume good faith, and want to try to read through each comment to isolate the key messages being conveyed. (I've written before about how being respectful of other editors includes being respectful of their time.) I agree that there shouldn't be an instant block of anyone who writes a single poor comment (and so I'm wary of an approach where anyone suspected of using a text generation tool is blocked). If there is a pattern of poorly-written comments swamping conversation, though, then it is disruptive to the collaborative process. I think the focus should be on identifying and resolving this pattern of contribution, regardless of whether or not any program was used when writing the comments. isaacl (talk) 00:14, 4 December 2024 (UTC)
- It's a pitfall with English Misplaced Pages's unmoderated discussion tradition: it's always many times the effort to follow the rules than to not. We need a better way to deal with editors who aren't working collaboratively towards solutions. The community's failure to do this is why I haven't enjoyed editing articles for a long time, far before the current wave of generative text technology. More poor writing will hardly be a ripple in the ocean. isaacl (talk) 18:21, 3 December 2024 (UTC)
- I tend to agree with this.
- I think that what @ScottishFinnishRadish is pointing at is that it doesn't feel fair if one person puts a lot more effort in than the other. We don't want this:
- Editor: Spends half an hour writing a long explanation.
- Troll: Pushes button to auto-post an argument.
- Editor: Spends an hour finding sources to support the claim.
- Troll: Laughs while pushing a button to auto-post another argument.
- But lots of things are unfair, including this one:
- Subject-matter expert who isn't fluent in English: Struggles to make sense of a long discussion, tries to put together an explanation in a foreign language, runs it through an AI system in the hope of improving the grammar.
- Editor: Revert, you horrible LLM-using troll! It's so unfair of you to waste my time with your AI garbage. The fact that you use AI demonstrates your complete lack of sincerity.
- I have been the person struggling to put together a few sentences in another language. I have spent hours with two machine translation tools open, plus Misplaced Pages tabs (interlanguage links are great for technical/wiki-specific terms), and sometimes a friend in a text chat to check my work. I have tried hard to get it right. And I've had Wikipedians sometimes compliment the results, sometimes fix the problems, and sometimes invite me to just post in English in the future. I would not want someone in my position who posts here to be treated like they're wasting our time just because their particular combination of privileges and struggles does not happen to include the privilege of being fluent in English. WhatamIdoing (talk) 04:04, 4 December 2024 (UTC)
- Sure, I agree it's not fair that some editors don't spend any effort in raising their objections (however they choose to write them behind the scenes), yet expect me to expend a lot of effort in responding. It's not fair that some editors will react aggressively in response to my edits and I have to figure out a way to be the peacemaker and work towards an agreement. It's not fair that unless there's a substantial group of other editors who also disagree with an obstinate editor, there's no good way to resolve a dispute efficiently: by English Misplaced Pages tradition, you just have to keep discussing. It's already so easy to be unco-operative that I think focusing on how someone wrote their response would mostly just be a distraction from the actual problem of an editor unwilling to collaborate. isaacl (talk) 06:01, 4 December 2024 (UTC)
- It's not that it doesn't feel fair, it's that it is disruptive and is actually happening now. See this and this. Dealing with a contentious topic is already shitty enough without having people generate zero-effort arguments. ScottishFinnishRadish (talk) 11:54, 4 December 2024 (UTC)
- People generating zero-effort arguments has been happening for far longer than LLMs have existed. Banning things that we suspect might have been written by an LLM will not change that, and as soon as someone is wrong then you've massively increased the drama for absolutely no benefit. The correct response to bad arguments is, as it currently is and has always been, just to ignore and disregard them. Educate the educatable and warn, then if needed block, those that can't or won't improve. Thryduulf (talk) 12:13, 4 December 2024 (UTC)
- See, I would assume this comment was written by an LLM, but that does not mean I discount it. I check and consider it as though it was completely written by a person. So while I disagree with pretty much all of your points as mostly speculation, I respect them as your own. But it really just sounds like fear of the unknown and unenforceable. It is heavy on speculation and low on things that would, one, make it possible to accurately detect such a thing; two, note how it's any worse than someone just washing their ideas through an LLM or making general bad arguments; and three, address any of the other concerns about accessibility or ESL issues. It looks more like a moral panic than an actual problem. You end with
section break 4
- Oppose. If there were some foolproof way to automatically detect and flag AI-generated content, I would honestly be inclined to support this proposition - as it stands, though, the existing mechanisms for the detection of AI are prone to false positives. Especially considering that English learnt as a second language is flagged as AI disproportionately by some detectors, it would simply constitute a waste of Misplaced Pages manpower - if AI-generated comments are that important, perhaps a system to allow users to manually flag comments and mark users that are known to use AI would be more effective. Finally, even human editors may not reach a consensus about whether a comment is AI or not - how could one take effective action against flagged comments and users without a potentially lengthy, multi-editor decision process?
1. https://www.theguardian.com/technology/2023/jul/10/programs-to-detect-ai-discriminate-against-non-native-english-speakers-shows-study Skibidilicious (talk) 15:06, 11 December 2024 (UTC)
- Oppose. Even if there were a way to detect AI-generated content, bad content can be removed or ignored on its own without needing to specify that it is AI-generated. GeogSage 01:19, 16 December 2024 (UTC)
- Support so long as it is only done with obviously LLM generated edits, I don't want anyone caught in the crossfire. Gaismagorm (talk) 02:17, 18 December 2024 (UTC)
- Soft support -- I've got no problem with an editor using an LLM for Grammarly-like support. However, the use of an LLM to generate an argument goes against what we expect from participants in these discussions. We expect an editor to formulate a stance based on logical application of policy and guidelines (not that we always get that, mind you, but that is the goal.) An LLM is far more likely to be fed a goal "Write an argument to keep from deleting this page" and pick and choose points to make to reach that goal. And I have great concern that we will see what we've seen with lawyers using LLMs to generate court arguments -- they produce things that look solid, but cite non-existent legal code and fictional precedents. At best this creates overhead for everyone else in the conversation; at worst, claims about what MOS:USEMAXIMUMCOMMAS says go unchecked and treated in good faith, and the results of the discussion are affected. -- Nat Gertler (talk) 03:46, 20 December 2024 (UTC)
Nice try, wiseguy! ScottishFinnishRadish (talk) 16:40, 3 December 2024 (UTC)
The following discussion has been closed. Please do not modify it.
Ah, so you think you’ve got it all figured out, huh? Well, let me tell ya somethin’, pal, your little spiel ain’t gonna fly without me takin’ a crack at it. See, you’re sittin’ there talkin’ about “good faith” and “moral panic” like you’re some kinda big shot philosopher, but lemme break it down for ya in plain terms, capisce?
First off, you wanna talk about assumin’ good faith. Sure, that’s a nice little dream ya got there, but out here in the real world, good faith don’t get ya far if you’re dealin’ with somethin’ you can’t trust. An LLM can spit out all the sweet-talkin’ words it wants, but who’s holdin’ the bag when somethin’ goes sideways? Nobody, that’s who. It’s like lettin’ a guy you barely know run your numbers racket—might look good on paper till the feds come knockin’.
And moral panic? Oh, give me a break. You think I’m wringin’ my hands over nothin’? No, no, this ain’t panic, it’s strategy. Ya gotta think two steps ahead, like a good game o’ poker. If you don’t plan for what could go wrong, you’re just beggin’ to get taken for a ride. That ain’t panic, pal, that’s street smarts.
Now, you say this is all speculation, huh? Listen, kid, speculation’s what built half the fortunes in this town, but it don’t mean it’s without a little insight. When I see a guy sellin’ “too good to be true,” I know he’s holdin’ somethin’ behind his back. Same thing with these LLMs—just ‘cause you can’t see the trouble right away don’t mean it ain’t there, waitin’ to bite ya like a two-bit hustler double-crossin’ his boss.
Then you go and say it’s no worse than bad arguments. Oh, come on! That’s like sayin’ counterfeit dough ain’t worse than real dough with a little coffee stain. A bad argument from a real person? At least ya know where it came from and who to hold accountable. But these machine-made arguments? They look sharp, sound slick, and fool the unsuspectin’—that’s a whole new level of trouble.
Now, about this “accessibility” thing. Sure, makin’ things easier for folks is all well and good. But lemme ask ya, what happens when you hand over tools like this without makin’ sure people know how to use ‘em right? You think I’d hand over a Tommy gun to some rookie without a clue? No way! Same goes for these LLMs. You gotta be careful who’s usin’ ‘em and how, or you’re just askin’ for a mess.
And don’t get me started on the “thoughtfulness” bit. Yeah, yeah, I get it, it’s hard to measure. But look, buddy, thoughtful arguments are like good business deals—they take time, effort, and a little bit o’ heart. If you let machines churn out arguments, you’re missin’ the whole point of what makes discourse real. It’s like replacin’ a chef with a vending machine—you might still get somethin’ to eat, but the soul’s gone.
Finally, fear of the unknown? Nah, that ain’t it. This ain’t fear—it’s caution. Any smart operator knows you don’t just jump into a deal without seein’ all the angles. What you’re callin’ fear, I call good business sense. You wanna bet the farm on untested tech without thinkin’ it through? Be my guest, but don’t come cryin’ to me when it all goes belly-up.
So there ya go, wise guy. You can keep singin’ the praises of these LLMs all you want, but out here in the big leagues, we know better than to trust somethin’ just ‘cause it talks smooth. Now, get outta here before you step on somethin’ you can’t scrape off.
- Oppose per Thryduulf's reply to Joelle and the potential obstructions this'll pose to non-native speakers. Aaron Liu (talk) 03:02, 3 December 2024 (UTC)
- Oppose. I agree with Thryduulf. Discussion comments which are incoherent, meaningless, vacuous, excessively verbose, or based on fabricated evidence can all be disposed of according to their content, irrespective of how they were originally created. Acute or repeated instances of such behavior by a user can lead to sanctions. We should focus on the substance of the comments (or lack thereof), not on whether text came from LLMs, which will too often be based on unreliable detection and vibes. Adumbrativus (talk) 05:49, 3 December 2024 (UTC)
- I can detect some instances of LLM use perfectly OK without having to use any tool. The question then raised is of how often it is used not-so-ineptly. For example, can anyone tell whether an AI is participating in this discussion (apart from EEng's example, but just possibly he himself wrote the bit that's collapsed and/or an LLM wrote the part that he claims to have written himself)? I don't know how good AI is currently, but I'm sure that it will get better to the extent that it will be undetectable. I would like all discussions on Misplaced Pages to be among humans but I'm not sure whether this proposal would be enforceable, so am on the fence about it. In a way I'm glad that I'm old, so won't see the consequences of AI, but my grandchildren will. Phil Bridger (talk) 10:32, 3 December 2024 (UTC)
WP:NOTAFORUM and WP:NOTHINGPERSONAL. CNC (talk) 20:29, 8 December 2024 (UTC)
- In my opinion, having a policy that permits closers to discount apparently-LLM-generated contributions will discourage good-faith editors from using LLMs irresponsibly and perhaps motivate bad-faith editors to edit the raw output to appear more human, which would at least involve some degree of effort and engagement with their "own" arguments. JoelleJay (talk) 00:51, 4 December 2024 (UTC)
- Oppose. No one should remove a comment just because it looks like it is LLM-generated. Many times non-native speakers might use it to express their thoughts coherently. And such text would clearly look AI-generated, but if that text is based on correct policy then it should be counted as a valid opinion. On the other hand, people doing only trolling by inserting nonsense passages can just be blocked, regardless of whether the text is AI-generated or not. The English Misplaced Pages is the largest wiki and it attracts many non-native speakers, so such a policy is just not good for this site. -- Parnaval (talk) 11:13, 3 December 2024 (UTC)
- If someone is a non-native speaker with poor English skills, how can they be sure that the AI-generated response is actually what they genuinely want to express? and, to be honest, if their English skills are so poor as to need AI to express themselves, shouldn't we be politely suggesting that they would be better off contributing on their native Misplaced Pages? Black Kite (talk) 11:37, 3 December 2024 (UTC)
- Reading comprehension skills and writing skills in foreign languages are very frequently not at the same level, it is extremely plausible that someone will be able to understand whether the AI output is what they want to express without having been able to write it themselves directly. Thryduulf (talk) 11:41, 3 December 2024 (UTC)
- That is very true. For example I can read and speak Polish pretty fluently, and do so every day, but I would not trust myself to be able to write to a discussion on Polish Misplaced Pages without some help, whether human or artificial. But I also wouldn't want to, because I can't write the language well enough to be able to edit articles. I think the English Misplaced Pages has many more editors who can't write the language well than others because it is both the largest one and the one written in the language that much of the world uses for business and higher education. We may wish that people would concentrate on other-language Wikipedias but most editors want their work to be read by as many people as possible. Phil Bridger (talk) 12:11, 3 December 2024 (UTC)
- (Personal attack removed) Zh Wiki Jack ★ Talk — Preceding undated comment added 15:07, 3 December 2024 (UTC)
- Why not write their own ideas in their native language, and then Google-translate it into English? Why bring in one of these loose-cannon LLMs into the situation? Here's a great example of the "contributions" to discussions we can expect from LLMs (from this AfD):
The claim that William Dunst (Dunszt Vilmos) is "non-notable as not meeting WP:SINGER" could be challenged given his documented activities and recognition as a multifaceted artist. He is a singer-songwriter, topliner, actor, model, and creative director, primarily active in Budapest. His career achievements include acting in notable theater productions such as The Jungle Book and The Attic. He also gained popularity through his YouTube music channel, where his early covers achieved significant views. In music, his works like the albums Vibrations (2023) and Sex Marathon (2024) showcase his development as a recording artist. Furthermore, his presence on platforms like SoundBetter, with positive reviews highlighting his unique voice and artistry, adds credibility to his professional profile. While secondary sources and broader media coverage may be limited, the outlined accomplishments suggest a basis for notability, particularly if additional independent verification or media coverage is sought.
- Useless garbage untethered to facts or policy. EEng 06:37, 6 December 2024 (UTC)
- Using Google Translate would be banned by the wording of this proposal given that it incorporates AI these days. Comments that are unrelated to facts or policy can (and should) be ignored under the current policy. As for the comment you quote, that doesn't address notability but based on 1 minute on Google it does seem factual. Thryduulf (talk) 10:37, 6 December 2024 (UTC)
- The proposal's wording can be adjusted. There are some factual statements in the passage I quoted, amidst a lot of BS such as the assertion that the theater productions were notable. EEng 17:06, 6 December 2024 (UTC)
The proposal's wording can be adjusted
Good idea! Let's change it and ping 77 people because supporters didn't have the foresight to realize machine translation uses AI. If such a change is needed, this is a bad RFC and should be closed. Sincerely, Dilettante 17:16, 6 December 2024 (UTC)
- Speak for yourself: my support !vote already accounted for (and excluded) constructive uses of AI to help someone word a message. If the opening statement was unintentionally broad, that's not a reason to close this RfC – we're perfectly capable of coming to a consensus that's neither "implement the proposal exactly as originally written" nor "don't implement it at all". jlwoodwa (talk) 19:05, 6 December 2024 (UTC)
- I don't think the discussion should be closed, nor do I say that. I'm arguing that if someone believes the hole is so big the RfC must be amended, they should support it being closed as a bad RfC (unless that someone thinks 77 pings is a good idea). Sincerely, Dilettante 19:47, 6 December 2024 (UTC)
- If you think constructive uses of AI should be permitted then you do not support this proposal, which bans everything someone or some tool thinks is AI, regardless of utility or indeed whether it actually is AI. Thryduulf (talk) 01:02, 7 December 2024 (UTC)
- This proposal explicitly covers
comments found to have been generated by AI/LLM/Chatbots
. "AI that helped me translate something I wrote in my native language" is not the same as AI that generated a comment de novo, as has been understood by ~70% of respondents. That some minority have inexplicably decided that generative AI covers analytic/predictive models and every other technology they don't understand, or that LLMs are literally the only way for non-English speakers to communicate in English, doesn't mean those things are true. JoelleJay (talk) 01:44, 7 December 2024 (UTC)
- Yeah, no strong feeling either way on the actual proposal, but IMO the proposal should not be interpreted as a prohibition on machine translation (though I would recommend people who want to participate via such to carefully check that the translation is accurate, and potentially post both language versions of their comment or make a note that it's translated if they aren't 100% sure the translation fully captures what they're trying to say). Alpha3031 (t • c) 09:06, 20 December 2024 (UTC)
- Support, more or less. There are times when an LLM can help with paraphrasing or translation, but it is far too prone to hallucination to be trusted for any sort of project discussion. There is also the issue of wasting editor time dealing with arguments and false information created by an LLM. The example Selfstudier links to above illustrates this. The editors on the talk page who aren't familiar with LLM patterns spent valuable time (and words, as ARBPIA editors are now word-limited) trying to find fake quotes and arguing against something that took essentially no time to create. I also had to spend a chunk of time checking the sources, cleaning up the discussion, and warning the editor. Forcing editors to spend valuable time arguing with a machine that doesn't actually comprehend what it's arguing is a no-go for me. As for the detection, for now it's fairly obvious to anyone who is fairly familiar with using an LLM when something is LLM generated. The detection tools available online are basically hot garbage. ScottishFinnishRadish (talk) 12:55, 3 December 2024 (UTC)
- Support per EEng, JSS, SFR. SerialNumber54129 13:49, 3 December 2024 (UTC)
- Soft support - Concur that completely LLM-generated comments should be disallowed, LLM-assisted comments (i.e. - I write a comment and then use LLMs as a spell-check/grammar engine) are more of a grey-area and shouldn't be explicitly disallowed. (ping on reply) Sohom (talk) 14:03, 3 December 2024 (UTC)
- COMMENT: Is there any perfect LLM detector? I am an LLM! Are you human? Hello Mr. Turing, testing 1,2,3,4 ...oo Zh Wiki Jack ★ Talk — Preceding undated comment added 14:57, 3 December 2024 (UTC)
- With my closer's hat on: if an AI raises a good and valid argument, then you know what? There's a good and valid argument and I'll give weight to it. But if an AI makes a point that someone else has already made in the usual waffly AI style, then I'm going to ignore it.—S Marshall T/C 18:33, 3 December 2024 (UTC)
- Support. All LLM output should be treated as vandalism. 92.40.198.139 (talk) 20:59, 3 December 2024 (UTC)
- Oppose as written. I'm with Rhododendrites in that we should give a more general caution rather than a specific rule. A lot of the problems here can be resolved by enforcing already-existing expectations. If someone is making a bunch of hollow or boiler-plate comments, or if they're bludgeoning, then we should already be asking them to engage more constructively, LLM or otherwise. I also share above concerns about detection tools being insufficient for this purpose and advise people not to use them to evaluate editor conduct. (Also, can we stop with the "strong" supports and opposes? You don't need to prove you're more passionate than the guy next to you.) Thebiguglyalien (talk) 02:04, 4 December 2024 (UTC)
- Oppose as written. There's already enough administrative discretion to handle this on a case-by-case basis. In agreement with much of the comments above, especially the concern that generative text can be a tool to give people access who might not otherwise (due to ability, language) etc. Regards, --Goldsztajn (talk) 06:12, 4 December 2024 (UTC)
- Strong support LLMs are a sufficiently advanced form of the Automatic Complaint-Letter Generator (1994). Output of LLMs should be collapsed and the offender barred from further discussion on the subject. Inauthentic behavior. Pollutes the discussion. At the very least, any user of an LLM should be required to disclose LLM use on their user page and to provide a rationale. A new user group can also be created (LLM-talk-user or LLM-user) to mark as such, by self or by the community. Suspected sockpuppets + suspected LLM users. The obvious patterns in output are not that hard to detect, with high degrees of confidence. As to "heavily edited" output, where is the line? If someone gets "suggestions" on good points, they should still write entirely in their own words. A legitimate use of AI may be to summarize walls of text. Even then, use caution and don't take it at face value. You will end up with LLMs arguing with other LLMs. Lines must be drawn. See also: WikiProject AI Cleanup, are they keeping up with how fast people type a prompt and click a button? Skullers (talk) 07:45, 4 December 2024 (UTC)
- I support the proposal that obvious LLM-generated !votes in discussions should be discounted by the closer or struck (the practical difference should be minimal). Additionally, users who do this can be warned using the appropriate talk page templates (e.g. Template:Uw-ai1), which are now included in Twinkle. I oppose the use of automated tools like GPTZero as the primary or sole method of determining whether comments are generated by LLMs. LLM comments are usually glaringly obvious (section headers within the comment, imprecise puffery, and at AfD an obvious misunderstanding of notability policies and complete disregard for sources). If LLM-ness is not glaringly obvious, it is not a problem, and we should not be going after editors for their writing style or because some tool says they look like a bot. Toadspike 10:29, 4 December 2024 (UTC)
- I also think closers should generally be more aggressive in discarding arguments counter to policy and all of us should be more aggressive in telling editors bludgeoning discussions with walls of text to shut up. These also happen to be the two main symptoms of LLMs. Toadspike 10:41, 4 December 2024 (UTC)
- In other words LLMs are irrelevant - you just want current policy to be better enforced. Thryduulf (talk) 15:24, 5 December 2024 (UTC)
- Oppose Having seen some demonstrated uses of LLMs in the accessibility area, I fear a hard and fast rule here is inherently discriminatory. Only in death does duty end (talk) 10:50, 4 December 2024 (UTC)
- What if LLM-users just had to note that a given comment was LLM-generated? JoelleJay (talk) 19:01, 4 December 2024 (UTC)
- What would we gain from that? If the comment is good (useful, relevant, etc) then it's good regardless of whether it was written by an LLM or a human. If the comment is bad then it's bad regardless of whether it was written by an LLM or a human. Thryduulf (talk) 20:04, 4 December 2024 (UTC)
- Well, for one, if they're making an argument like the one referenced by @Selfstudier and @ScottishFinnishRadish above it would have saved a lot of editor time to know that the fake quotes from real references were generated by LLM, so that other editors could've stopped trying to track those specific passages down after the first one failed verification. For another, at least with editors whose English proficiency is noticeably not great the approach to explaining an issue to them can be tailored and misunderstandings might be more easily resolved as translation-related. I know when I'm communicating with people I know aren't native English-speakers I try to be more direct/less idiomatic and check for typos more diligently. JoelleJay (talk) 22:46, 4 December 2024 (UTC)
- And see what ChatGPT itself had to say about that idea, at #ChaptGPT_agrees above. EEng 22:25, 4 December 2024 (UTC)
- Oppose per above. As Rhododendrites points out, detection of LLM-generated content is not foolproof and even when detection is accurate, such a practice would be unfair for non-native English speakers who rely on LLMs to polish their work. Additionally, we evaluate contributions based on their substance, not by the identity and social capital of the author, so using LLMs should not be seen as inherently inferior to wholly human writing—are ChatGPT's arguments ipso facto less than a human's? If so, why?
DE already addresses substandard contributions, whether due to lack of competence or misuse of AI, so a separate policy targeting LLMs is unnecessary. Sincerely, Dilettante 21:14, 4 December 2024 (UTC)
We evaluate contributions based on their substance, not by the identity and social capital of the author
: true in theory; not reflected in practice.
are ChatGPT's arguments ipso facto less than a human's?
Yes. Chatbots are very advanced predictive text engines. They do not have an argument: they iteratively select text chunks based on probabilistic models. As mentioned above, humans are good detectors of LLM output, and don't require corroborative results from other machine learning models. Folly Mox (talk) 14:00, 5 December 2024 (UTC)
- "...LLMs can produce novel arguments that convince independent judges at least on a par with human efforts. Yet when informed about an orator’s true identity, judges show a preference for human over LLM arguments." - Palmer, A., & Spirling, A. (2023). Large Language Models Can Argue in Convincing Ways About Politics, But Humans Dislike AI Authors: implications for Governance. Political Science, 75(3), 281–291. https://doi.org/10.1080/00323187.2024.2335471. And that result was based on Meta's OPT-30B model, which performed at about GPT-3 level. There are far better performing models out there now, like GPT-4o and Claude 3.5 Sonnet. Sean.hoyland (talk) 15:24, 5 December 2024 (UTC)
As mentioned above, humans are good detectors of LLM output, and don't require corroborative results from other machine learning models.
Yet your reply to me made no mention of the fact that my comment is almost wholly written by an LLM, the one exception being me replacing "the Misplaced Pages policy Disruptive editing" with "DE". I went to ChatGPT, provided it a handful of my comments on Misplaced Pages and elsewhere, as well as a few comments on this discussion, asked it to mimic my style (which probably explains why the message contains my stylistic quirks turned up to 11), and repeatedly asked it to trim the post. I'd envision a ChatGPT account, with a larger context window, would allow even more convincing comments, to say nothing of the premium version. A DUCK-style test for comments singles out people unfamiliar with the differences between formal English and LLM outputs, precisely those who need it most since they can write neither. Others have raised scenarios where a non-fluent speaker may need to contribute.
- In other words, LLMs can 100% be used for constructive !votes on RfCs, AfDs, and whatnot. I fed it my comments only so that those familiar with my writing style didn't get suspicious. I believe every word in the comment and had considered every point it made in advance, so I see no reason for this to be worth less than if I had typed it out myself. If I'd bullet-pointed my opinion and asked it to expand, that'd have been better yet.
They do not have an argument: they iteratively select text chunks based on probabilistic models.
I'm aware. If a monkey types up Othello, is the play suddenly worth(less)? An LLM is as if the monkey were not selecting words at random, but rather choosing what to type based on contextualized tokens. I believe a text is self-contained and should be considered in its own right, but that's not something I'll sway anyone on or vice versa.
true in theory; not reflected in practice
So we should exacerbate the issue by formalizing this discrimination on the basis of authorship?
- To be clear, this is my only usage of an LLM anywhere on Misplaced Pages. Sincerely, Dilettante 01:22, 6 December 2024 (UTC)
In other words, LLMs can 100% be used for constructive !votes on RfCs, AfDs, and whatnot.
So then what is the point in having any discussion at all if an LLM can just spit out a summary of whichever policies and prior comments it was fed and have its "opinion" counted? What happens when there are multiple LLM-generated comments in a discussion, each fed the same prompt material and prior comments -- that would not only artificially sway consensus significantly in one direction (including "no consensus"), it could produce a consensus stance that no human !voter even supported! It also means those human participants will waste time reading and responding to "users" who cannot be "convinced" of anything. Even for editors who can detect LLM content, it's still a waste of their time reading up to the point they recognize the slop. And if closers are not allowed to discount seemingly-sound arguments solely because they were generated by LLM, then they have to have a lot of faith that the discussion's participants not only noticed the LLM comments, but did thorough fact-checking of any tangible claims made in them. With human comments we can at least assume good faith that a quote is really in a particular inaccessible book. People who are not comfortable enough in their English fluency can just machine translate from whichever language they speak, why would they need an LLM? And obviously people who are not competent in comprehending any language should not be editing Misplaced Pages... JoelleJay (talk) 03:17, 6 December 2024 (UTC)
- Human !voters sign off and take responsibility for the LLM opinions they publish. If they continue to generate, then the relevant human signer wouldn't be convinced of anything anyway; at least here, the LLM comments might make more sense than whatever nonsense the unpersuadable user might've generated. (And machine translation relies on LLMs, not to mention there are people who don't know any other language yet have trouble communicating. Factual writing and especially comprehension are different from interpersonal persuasion.)
While I agree that fact-checking is a problem, I weight it much lower than you do relative to the other effects a ban would cause. Aaron Liu (talk) 15:16, 6 December 2024 (UTC)
So then what is the point in having any discussion at all if an LLM can just spit out a summary of whichever policies and prior comments it was fed and have its "opinion" counted?
I'm of the opinion humans tend to be better at debating, reading between the lines, handling obscure PAGs, and arriving at consensus.
What happens when there are multiple LLM-generated comments in a discussion, each fed the same prompt material and prior comments -- that would not only artificially sway consensus significantly in one direction (including "no consensus"), it could produce a consensus stance that no human !voter even supported!
It's safe to assume those LLMs are set to a low temperature, which would cause them to consistently agree when fed the same prompt. In that case, they'll produce the same arguments; instead of rebutting x humans' opinions, those on the opposite side need only rebut one LLM. If anything, that's less time wasted. Beyond that, if only one set of arguments is being raised, a multi-paragraph !vote matters about as much as a "Support per above". LLMs are not necessary for people to be disingenuous and !vote for things they don't believe. Genuine question: what's worse, this hypothetical scenario where multiple LLM users are swaying a !vote to an opinion no-one believes, or the very real and common scenario that a non-English speaker needs to edit enwiki?
Even for editors who can detect LLM content, it's still a waste of their time reading up to the point they recognize the slop.
This proposal wouldn't change that for most people, because it's about closers.
With human comments we can at least assume good faith that a quote is really in a particular inaccessible book.
No-one's saying you should take an LLM's word for quotes from a book.
People who are not comfortable enough in their English fluency can just machine translate from whichever language they speak, why would they need an LLM?
It's a pity you're lobbying to ban most machine translators. Sincerely, Dilettante 17:08, 6 December 2024 (UTC)
It's safe to assume those LLMs are set to a low temperature, which would cause them to consistently agree when fed the same prompt. In that case, they'll produce the same arguments; instead of rebutting x humans' opinions, those on the opposite side need only rebut one LLM. If anything, that's less time wasted.
...You do know how consensus works, right? Since closers are supposed to consider each contribution individually and without bias to "authorship" to determine the amount of support for a position, then even a shitty but shallowly policy-based position would get consensus based on numbers alone. And again, non-English speakers can use machine-translation, like they've done for the last two decades.
This proposal wouldn't change that for most people, because it's about closers.
Of course it would; if we know closers will disregard the LLM comments, we won't need to waste time reading and responding to them.
No-one's saying you should take an LLM's word for quotes from a book.
Of course they are. If LLM comments must be evaluated the same as human comments, then AGF on quote fidelity applies too. Otherwise we would be expecting people to do something like "disregard an argument based on being from an LLM".
It's a pity you're lobbying to ban most machine translators.
The spirit of this proposal is clearly not intended to impact machine translation. AI-assisted != AI-generated. JoelleJay (talk) 18:42, 6 December 2024 (UTC)
- I appreciate that the availability of easily generated paragraphs of text (regardless of underlying technology) in essence makes the "eternal September" effect worse. I think, though, it's already been unmanageable for years now, without any programs helping. We need a more effective way to manage decision-making discussions so participants do not feel a need to respond to all comments, and the weighing of arguments is considered more systematically to make the community consensus more apparent. isaacl (talk) 19:41, 6 December 2024 (UTC)
Since closers are supposed to consider each contribution individually and without bias to "authorship"
I'm the one arguing for this to be practice, yes.
then even a shitty but shallowly policy-based position would get consensus based on numbers alone
That is why I state "per above" and "per User" !votes hold equal potential for misuse.
Of course it would; if we know closers will disregard the LLM comments, we won't need to waste time reading and responding to them.
We don't know closers are skilled at recognizing LLM slop. I think my !vote shows many who think they can tell cannot. Any commenter complaining about a non-DUCK post will have to write out "This is written by AI" and explain why. DUCK posts already run afowl of BLUDGEON, DE, SEALION, etc.
If LLM comments must be evaluated the same as human comments, then AGF on quote fidelity applies too
Remind me again of what AGF stands for? Claiming LLMs have faith of any kind, good or bad, is ludicrous. From the policy:
Assuming good faith (AGF) means assuming that people are not deliberately trying to hurt Misplaced Pages, even when their actions are harmful.
A reasonable reply would be "Are these quotes generated by AI? If so, please be aware AI chatbots are prone to hallucinations and cannot be trusted to cite accurate quotes." This AGFs the poster doesn't realize the issue and places the burden of proof squarely on them.
generate (verb): to bring into existence. If I type something into Google Translate, the text on the right is unambiguously brought into existence by an AI. Sincerely, Dilettante 21:22, 6 December 2024 (UTC)
- "Per above" !votes do not require other editors to read and/or respond to their arguments, and anyway are already typically downweighted, unlike !votes actively referencing policy. The whole point is to disregard comments that have been found to be AI-generated; it is not exclusively up to the closer to identify those comments in the first place. Yes, we will be expecting other editors to point out less obvious examples and to ask if AI was used, what is the problem with that? No, DUCK posts do not necessarily already violate BLUDGEON etc., as I learned in the example from Selfstudier, and anyway we still don't discount the !votes of editors in good standing that bludgeoned/sealioned etc., so that wouldn't solve the problem at all. Obviously other editors will be asking suspected LLM commenters if their comments are from LLMs? But what you're arguing is that even if the commenter says yes, their !vote still can't be disregarded for that reason alone, which means the burden is still on other editors to prove that the content is false. We are not talking about the contextless meaning of the word "generate", we are talking about the very specific process of text generation in the context of generative AI, as the proposal lays out very explicitly. JoelleJay (talk) 02:13, 7 December 2024 (UTC)
- I’m not going to waste time debating someone who resorts to claiming people on the other side are either ignorant of technology or are crude strawmans. If anyone else is interested in actually hearing my responses, feel free to ask. Sincerely, Dilettante 16:13, 7 December 2024 (UTC)
- Or you could actually try to rebut my points without claiming I'm trying to ban all machine translators... JoelleJay (talk) 22:07, 7 December 2024 (UTC)
- For those following along, I never claimed that. I claimed those on JoelleJay’s side are casting !votes such that most machine translators would be banned. It was quite clear at the time that they, personally, support a carve out for machine translation and I don’t cast aspersions. Sincerely, Dilettante 15:42, 8 December 2024 (UTC)
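For readers unfamiliar with the "low temperature" point debated above: temperature rescales a model's next-token scores before sampling, and as it approaches zero the sampling becomes effectively deterministic. A minimal self-contained sketch (the logits here are toy values, not any particular model's output):

```python
import math
import random

def sample(logits, temperature, rng):
    """Sample a token index from logits after temperature scaling (softmax sampling)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.5, 0.1]  # toy next-token scores

# Near-zero temperature: the distribution collapses onto the top token,
# so runs with different seeds all agree.
cold = {sample(logits, 0.01, random.Random(seed)) for seed in range(100)}

# High temperature: the distribution flattens and samples vary.
hot = {sample(logits, 5.0, random.Random(seed)) for seed in range(100)}

print(cold)          # {0}
print(len(hot) > 1)  # True
```

This is why two accounts running the same prompt at low temperature will tend to produce the same argument rather than independent ones, as claimed above.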
- Support a broad bar against undisclosed LLM-generated comments and even a policy that undisclosed LLM-generated comments could be sanctionable, in addition to struck through / redacted / ignored; people using them for accessibility / translation reasons could just disclose that somewhere (even on their user page would be fine, as long as they're all right with some scrutiny as to whether they're actually using it for a legitimate purpose.) The fact is that LLM comments raise significant risk of abuse, and often the fact that a comment is clearly LLM-generated is often going to be the only evidence of that abuse. I wouldn't be opposed to a more narrowly-tailored ban on using LLMs in any sort of automated way, but I feel a broader ban may be the only practical way to confront the problem. That said, I'd oppose the use of tools to detect LLM-comments, at least as the primary evidence; those tools are themselves unreliable LLM things. It should rest more on WP:DUCK issues and behavioral patterns that make it clear that someone is abusing LLMs. --Aquillion (talk) 22:08, 4 December 2024 (UTC)
- Support per reasons discussed above; something generated by an LLM is not truly the editor's opinion. On an unrelated note, have we seen any LLM-powered unapproved bots come in and do things like POV-pushing and spam page creation without human intervention? If we haven't, I think it's only a matter of time. Passengerpigeon (talk) 23:23, 4 December 2024 (UTC)
- Weak oppose in the sense that I don't think all LLM discussion text should be deleted. There are at least a few ESL users who use LLMs for assistance but try to check the results as best they can before posting, and I don't think their comments should be removed indiscriminately. What I do support (although not as a formal WP:PAG) is being much more liberal in hatting LLM comments when the prompter has failed to prevent WP:WALLOFTEXT/irrelevant/incomprehensible output than we maybe would for human-generated text of that nature. Mach61 03:05, 5 December 2024 (UTC)
- Oppose Any comments made by any editors are of their own responsibility and representing their own chosen opinions to hit the Publish Changes button on. If that comment was made by an LLM, then whatever it says is something the editor supports. I see no reason whatsoever to collapse anything claimed to be made by an LLM (whose detectors are 100% not reliable in the first place). If the comment being made is irrelevant to the discussion, then hatting it is already something covered by policy in the first place. This does make me want to start my comments with "As a large language model trained by OpenAI" though just to mess with people trying to push these sorts of policy discussions. Silverseren 05:29, 5 December 2024 (UTC)
- Or, as ChatGPT puts it,
Why banning LLM usage in comments would be detrimental, a ChatGPT treatise (collapsed)
- I'm honestly a bit impressed with the little guy. Silverseren 05:39, 5 December 2024 (UTC)
- It is somewhat amusing how easy it is to get these chatbots to output apologia for these chatbots. Too bad it's always so shallow. Probably because the people who inserted those canned responses are shallow people is my opinion. Simonm223 (talk) 19:44, 6 December 2024 (UTC)
- Support those who are opposing have clearly never had to deal with trolls who endlessly WP:SEALION. If I wanted to have a discussion with a chatbot, I'd go and find one. ~~ AirshipJungleman29 (talk) 13:14, 5 December 2024 (UTC)
- What's wrong with just banning and hatting the troll? Aaron Liu (talk) 13:49, 5 December 2024 (UTC)
- Someone trolling and sealioning can (and should) be blocked under current policy, whether they use an LLM or not is irrelevant. Thryduulf (talk) 15:22, 5 December 2024 (UTC)
- Oppose per Rhododendrites. This is a case-by-case behavioral issue, and using LLMs != being a troll. Frostly (talk) 17:30, 5 December 2024 (UTC)
- Support: the general principle is sound - where the substance has been originally written by gen-AI, comments will tend to add nothing to the discussion and even annoy or confuse other users. In principle, we should not allow such tools to be used in discussions. Comments written originally before improvement or correction by AI, particularly translation assistants, fall into a different category. Those are fine. There also has to be a high standard for comment removal. Suspicion that gen-AI might have been used is not enough. High gptzero scores are not enough. The principle should go into policy but under a stonking great caveat - WP:AGF takes precedence and a dim view will be taken of generative-AI inquisitors. arcticocean ■ 17:37, 5 December 2024 (UTC)
- Support If a human didn't write it, humans shouldn't spend time reading it. I'll go further and say that LLMs are inherently unethical technology and, consequently, people who rely on them should be made to feel bad. ESL editors who use LLMs to make themselves sound like Brad Anderson in middle management should stop doing that because it actually gets in the way of clear communication. I find myself unpersuaded by arguments that existing policies and guidelines are adequate here. Sometimes, one needs a linkable statement that applies directly to the circumstances at hand. By analogy, one could argue that we don't really need WP:BLP, for example, because adhering to WP:V, WP:NPOV, and WP:NOR ought already to keep bad material out of biographies of living people. But in practice, it turned out that having a specialized policy that emphasizes the general ethos of the others while tailoring them to the problem at hand is a good thing. XOR'easter (talk) 18:27, 5 December 2024 (UTC)
- Strong support - Making a computer generate believable gibberish for you is a waste of time, and tricking someone else into reading it should be a blockable offense. If we're trying to create an encyclopedia, you cannot automate any part of the thinking. We can automate processes in general, but any attempt at automating the actual discussion or thought-processes should never be allowed. If we allow this, it would waste countless hours of community time dealing with inane discussions, sockpuppetry, and disruption. Imagine a world where LLMs are allowed and popular - it's a sockpuppeteer's dream scenario - you can run 10 accounts and argue the same points, and the reason why they all sound alike is just merely because they're all LLM users. You could even just spend a few dollars a month and run 20-30 accounts to automatically disrupt wikipedia discussions while you sleep, and if LLM usage was allowed, it would be very hard to stop. However, I don't have much faith in AI detection tools (partially because it's based on the same underlying flawed technology), and would want any assumption of LLM usage to be based on obvious evidence, not just a score on some website. Also, to those who are posting chatgpt snippets here: please stop - it's not interesting or insightful, just more slop BugGhost 🦗👻 19:15, 5 December 2024 (UTC)
- I agree with your assessment “Also, to those who are posting chatgpt snippets here: please stop - it's not interesting or insightful, just more slop” but unfortunately some editors who should really know better think it’s WaCkY to fill serious discussions with unfunny, distracting “humor”. Dronebogus (talk) 21:54, 5 December 2024 (UTC)
- I also concur. "I used the machine for generating endless quantities of misleading text to generate more text" is not a good joke. XOR'easter (talk) 22:46, 5 December 2024 (UTC)
- Strong support if you asked a robot to spew out some AI slop to win an argument you’re basically cheating. The only ethical reason to do so is because you can’t speak English well, and the extremely obvious answer to that is “if you can barely speak English why are you editing English Misplaced Pages?” That’s like a person who doesn’t understand basic physics trying to explain the second law of thermodynamics using a chatbot. Dronebogus (talk) 21:32, 5 December 2024 (UTC)
- I don't think "cheating" is a relevant issue here. Cheating is a problem if you use an LLM to win and get a job, award, college acceptance etc. that you otherwise wouldn't deserve. But WP discussions aren't a debating-skills contest, they're an attempt to determine the best course of action.
- So using an AI tool in a WP discussion is not cheating (though there may be other problems), just as riding a bike instead of walking isn't cheating unless you're trying to win a race. ypn^2 22:36, 5 December 2024 (UTC)
- Maybe “cheating” isn’t the right word. But I think that a) most AI generated content is garbage (it can polish the turd by making it sound professional, but it’s still a turd underneath) and b) it’s going to be abused by people trying to gain a material edge in an argument. An AI can pump out text far faster than a human and that can drown out or wear down the opposition if nothing else. Dronebogus (talk) 08:08, 6 December 2024 (UTC)
- Bludgeoning is already against policy. It needs to be more strongly enforced, but it needs to be more strongly enforced uniformly rather than singling out comments that somebody suspects might have had AI-involvement. Thryduulf (talk) 10:39, 6 December 2024 (UTC)
- Support; I agree with Remsense and jlwoodwa, among others: I wouldn't make any one AI-detection site the Sole Final Arbiter of whether a comment "counts", but I agree it should be expressly legitimate to discount AI / LLM slop, at the very least to the same extent as closers are already expected to discount other insubstantial or inauthentic comments (like if a sock- or meat-puppet copy-pastes a comment written for them off-wiki, as there was at least one discussion and IIRC ArbCom case about recently). -sche (talk) 22:10, 5 December 2024 (UTC)
- You don't need a new policy that does nothing but duplicate a subset of existing policy. At most what you need is to add a sentence to the existing policy that states "this includes comments written using LLMs", however you'd rightly get a lot of pushback on that because it's completely redundant and frankly goes without saying. Thryduulf (talk) 23:37, 5 December 2024 (UTC)
- Support hallucinations are real. We should be taking a harder line against LLM generated participation. I don't think everyone who is doing it knows that they need to stop. Andre🚐 23:47, 5 December 2024 (UTC)
- Comment - Here is something that I imagine we will see more often. I wonder where it fits into this discussion. A user employs perplexity's RAG based system, search+LLM, to help generate their edit request (without the verbosity bias that is common when people don't tell LLMs how much output they want). Sean.hoyland (talk) 03:13, 6 December 2024 (UTC)
- Support per all above. Discussions are supposed to include the original arguments/positions/statements/etc of editors here, not off-site chatbots. The Kip 03:53, 6 December 2024 (UTC)
- I also find it pretty funny that ChatGPT itself said it shouldn't be used, as per the premise posted above by EEng. The Kip 03:58, 6 December 2024 (UTC)
- "sycophancy is a general behavior of state-of-the-art AI assistants, likely driven in part by human preference judgments favoring sycophantic responses" - Towards Understanding Sycophancy in Language Models. They give us what we want...apparently. And just like with people, there is position bias, so the order of things can matter. Sean.hoyland (talk) 04:26, 6 December 2024 (UTC)
- (Is this where I respond? If not, please move.) LLM-generated prose should be discounted. Sometimes there will be a discernible point in there; it may even be what the editor meant, lightly brushed up with what ChatGPT thinks is appropriate style. (So I wouldn't say "banned and punishable" in discussions, although we already deprecate machine translations on en.wiki and for article prose, same difference—never worth the risk.) However, LLMs don't think. They can't explain with reference to appropriate policy and guidelines. They may invent stuff, or use the wrong words—at AN recently, an editor accused another of "defaming" and "sacrilege", thus drowning their point that they thought that editor was being too hard on their group by putting their signature to an outrageous personal attack. I consider that an instance of LLM use letting them down. If it's not obvious that it is LLM use, then the question doesn't arise, right? Nobody is arguing for requiring perfect English. That isn't what WP:CIR means. English is a global language, and presumably for that reason, many editors on en.wiki are not native speakers, and those that aren't (and those that are!) display a wide range of ability in the language. Gnomes do a lot of fixing of spelling, punctuation and grammar in articles. In practice, we don't have a high bar to entrance in terms of English ability (although I think a lot more could be done to explain to new editors whose English is obviously non-native what the rule or way of doing things is that they have violated). And some of our best writers are non-native; a point that should be emphasised because we all have a right of anonymity here, many of us use it, and it's rare, in particular, that I know an editor's race. Or even nationality (which may not be the same as where they live). But what we do here is write in English: both articles and discussions. 
If someone doesn't have the confidence to write their own remark or !vote, then they shouldn't participate in discussions; I strongly suspect that it is indeed a matter of confidence, of wanting to ensure the English is impeccable. LLMs don't work that way, really. They concoct things like essays based on what others have written. Advice to use them in a context like a Misplaced Pages discussion is bad advice. At best it suggests you let the LLM decide which way to !vote. If you have something to say, say it and if necessary people will ask a question for clarification (or disagree with you). They won't mock your English (I hope! Civility is a basic rule here!) It happens in pretty much every discussion that somebody makes an English error. No biggie. I'll stop there before I make any more typos myself; typing laboriously on my laptop in a healthcare facility, and anyway Murphy's Law covers this. Yngvadottir (talk)
- I dunno about this specifically but I want to chime in to say that I find LLM-generated messages super fucking rude and unhelpful and support efforts to discourage them. – Joe (talk) 08:15, 6 December 2024 (UTC)
- Comment I think obvious LLM/chatbot text should at least be tagged through an Edit filter for Recent Changes, then RC Patrollers and reviewers can have a look and decide for themselves. Am (Notes) 11:58, 6 December 2024 (UTC)
- How do you propose that such text be identified by an edit filter? LLM detection tools have high rates of both false positives and false negatives. Thryduulf (talk) 12:47, 6 December 2024 (UTC)
- It might become possible once watermarks (like DeepMind's SynthID) are shown to be robust and are adopted. Some places are likely to require it at some point e.g. EU. I guess it will take a while though and might not even happen e.g. I think OpenAI recently decided to not go ahead with their watermark system for some reason. Sean.hoyland (talk) 13:17, 6 December 2024 (UTC)
- It will still be trivial to bypass the watermarks, or use LLMs that don't implement them. It also (AIUI) does nothing to reduce false positives (which for our usecase are far more damaging than false negatives). Thryduulf (talk) 13:30, 6 December 2024 (UTC)
- Maybe, that seems to be the case with some of the proposals. Others, like SynthID claim high detection rates, maybe because even a small amount of text contains a lot of signals. As for systems that don't implement them, I guess that would be an opportunity to make a rule more nuanced by only allowing use of watermarked output with verbosity limits...not that I support a rule in the first place. People are going to use/collaborate with LLMs. Why wouldn't they? Sean.hoyland (talk) 14:38, 6 December 2024 (UTC)
- I don't think watermarks are a suitable thing to take into account. My view is that LLM usage should be a blockable offense on any namespace, but if it ends up being allowed under some circumstances then we at least need mandatory manual disclosures for any usage. Watermarks won't work / aren't obvious enough - we need something like {{LLM}} but self-imposed, and not tolerate unmarked usage. BugGhost 🦗👻 18:21, 6 December 2024 (UTC)
- They will have to work at some point (e.g. ). Sean.hoyland (talk) 06:27, 7 December 2024 (UTC)
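To make the watermarking exchange above concrete: one published family of schemes (the "green list" approach) biases the generator toward a pseudo-random half of the vocabulary derived from the previous token, and a detector checks whether suspiciously many tokens landed in that half. SynthID's actual algorithm differs; the hash-based split and all parameters below are illustrative assumptions:

```python
import hashlib
import math
import random

VOCAB_SIZE = 1000

def is_green(prev_token, token):
    # The previous token's hash pseudo-randomly marks half the vocabulary as "green".
    h = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16)
    return (h + token) % 2 == 0

def z_score(tokens):
    # Unwatermarked text is green ~half the time; watermarked text far more often.
    n = len(tokens) - 1
    greens = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return (greens - n / 2) / math.sqrt(n / 4)

rng = random.Random(0)

# A "watermarked generator": always continue with a green token.
watermarked = [rng.randrange(VOCAB_SIZE)]
for _ in range(199):
    greens = [t for t in range(VOCAB_SIZE) if is_green(watermarked[-1], t)]
    watermarked.append(rng.choice(greens))

# Ordinary text: tokens chosen with no green bias.
rng2 = random.Random(1)
plain = [rng2.randrange(VOCAB_SIZE) for _ in range(200)]

print(z_score(watermarked))  # far outside chance levels (~14 standard deviations)
print(z_score(plain))        # within ordinary chance levels
```

As Thryduulf notes above, this only works if the generator cooperates: text from a model that doesn't implement the scheme, or heavily paraphrased output, carries no such signal.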
- I don't think watermarks are a suitable thing to take into account. My view is that LLM usage should be a blockable offense on any namespace, but if it ends up being allowed under some circumstances then we at least need mandatory manual disclosures for any usage. Watermarks won't work / aren't obvious enough - we need something like {{LLM}} but self-imposed, and not tolerate unmarked usage. BugGhost 🦗👻 18:21, 6 December 2024 (UTC)
- Maybe, that seems to be the case with some of the proposals. Others, like SynthID claim high detection rates, maybe because even a small amount of text contains a lot of signals. As for systems that don't implement them, I guess that would be an opportunity to make a rule more nuanced by only allowing use of watermarked output with verbosity limits...not that I support a rule in the first place. People are going to use/collaborate with LLMs. Why wouldn't they? Sean.hoyland (talk) 14:38, 6 December 2024 (UTC)
- It will still be trivial to bypass the watermarks, or use LLMs that don't implement them. It also (AIUI) does nothing to reduce false positives (which for our usecase are far more damaging than false negatives). Thryduulf (talk) 13:30, 6 December 2024 (UTC)
- It might become possible once watermarks (like DeepMind's SynthID) are shown to be robust and are adopted. Some places are likely to require it at some point e.g. EU. I guess it will take a while though and might not even happen e.g. I think OpenAI recently decided to not go ahead with their watermark system for some reason. Sean.hoyland (talk) 13:17, 6 December 2024 (UTC)
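As an aside for readers unfamiliar with how SynthID-style watermarks can be detected even in short text: the general idea behind green-list statistical watermarking can be sketched in a few lines. Everything below is a deliberately simplified illustration (a toy vocabulary, a sampler that always picks a "green" token) and not SynthID's actual algorithm, which only softly biases token choice to preserve output quality.

```python
import random

VOCAB = list(range(1000))   # toy vocabulary of token ids
GREEN_FRACTION = 0.5        # half the vocabulary is "green" at each step

def green_list(prev_token: int) -> list[int]:
    # Seed a PRNG from the previous token, so a detector that sees only
    # the text can recompute the exact same partition of the vocabulary.
    rng = random.Random(prev_token)
    return sorted(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def generate(n: int, seed: int = 0) -> list[int]:
    # A watermarking sampler: always pick the next token from the green list.
    rng = random.Random(seed)
    tokens = [rng.choice(VOCAB)]
    for _ in range(n - 1):
        tokens.append(rng.choice(green_list(tokens[-1])))
    return tokens

def green_score(tokens: list[int]) -> float:
    # Fraction of transitions that landed on a green token.
    # Unwatermarked text hovers near 0.5; watermarked text scores much higher.
    hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(1, len(tokens) - 1)
```

Because every token transition carries one bit of signal, even a short passage yields a statistically strong score, which is why "a small amount of text contains a lot of signals".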
- Good news! Queen of Hearts is already working on that in 1325. jlwoodwa (talk) 16:12, 6 December 2024 (UTC)
- How do you propose that such text be identified by an edit filter? LLM detections tools have high rates of both false positives and false negatives. Thryduulf (talk) 12:47, 6 December 2024 (UTC)
- Comment As a practical matter, users posting obvious LLM-generated content will typically be in violation of other rules (e.g. disruptive editing, sealioning), in which case their discussion comments absolutely should be ignored, discouraged, discounted, or (in severe cases) hatted. But a smaller group of users (e.g. people using LLMs as a translation tool) may be contributing productively, and we should seek to engage with, rather than discourage, them. So I don't see the need for a separate bright-line policy that risks erasing the need for discernment — in most cases, a friendly reply to the user's first LLM-like post (perhaps mentioning WP:LLM, which isn't a policy or guideline, but is nevertheless good advice) will be the right approach to work out what's really going on. Preimage (talk) 15:53, 6 December 2024 (UTC)
- Yeah, this is why I disagree with the BLP analogy above. There's no great risk/emergency to ban the discernment. Aaron Liu (talk) 17:34, 6 December 2024 (UTC)
- Those pesky sealion Chatbots are just the worst! Martinevans123 (talk) 18:41, 6 December 2024 (UTC)
- Some translation tools have LLM assistance, but the whole point of generative models is to create text far beyond what is found in the user's input, and the latter is clearly what this proposal covers. JoelleJay (talk) 19:01, 6 December 2024 (UTC)
- That might be what the proposal intends to cover, but it is not what the proposal actually covers. The proposal covers all comments that have been generated by LLMs and/or AI, without qualification. Thryduulf (talk) 01:05, 7 December 2024 (UTC)
- 70+% here understand the intention matches the language: generated by LLMs etc means "originated through generative AI tools rather than human thought", not "some kind of AI was involved in any step of the process". Even LLM translation tools don't actually create meaningful content where there wasn't any before; the generative AI aspect is only in the use of their vast training data to characterize the semantic context of your input in the form of mathematical relationships between tokens in an embedding space, and then match it with the collection of tokens most closely resembling it in the other language. There is, definitionally, a high level of creative constraint in what the translation output is since semantic preservation is required, something that is not true for text generation. JoelleJay (talk) 04:01, 7 December 2024 (UTC)
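The embedding-space matching described above can be sketched with toy numbers. Note the vectors and word lists below are invented purely for illustration; real multilingual models use thousands of dimensions, but the "nearest vector in the shared semantic space" constraint works the same way.

```python
import math

# Toy bilingual "embedding space": words from both languages mapped into
# the same 3-dimensional semantic space (vectors are invented).
EN = {"dog": (0.9, 0.1, 0.0), "cat": (0.8, 0.3, 0.1), "run": (0.0, 0.9, 0.2)}
FR = {"chien": (0.88, 0.12, 0.02), "chat": (0.79, 0.31, 0.08), "courir": (0.05, 0.91, 0.18)}

def cosine(a, b):
    # Cosine similarity: how closely two embedding vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def translate(word: str) -> str:
    # Pick the target-language token whose embedding is closest to the
    # source token's: semantic preservation acts as the creative constraint.
    vec = EN[word]
    return max(FR, key=lambda w: cosine(vec, FR[w]))
```

The point being illustrated: the output is pinned to the meaning of the input, which is the "high level of creative constraint" that distinguishes translation from open-ended text generation.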
- Do you have any evidence for your assertion that 70% of respondents have interpreted the language in the same way as you? Reading the comments associated with the votes suggests that it's closer to 70% of respondents who don't agree with you. Even if you are correct, 30% of respondents misreading a policy indicates the policy is badly worded. Thryduulf (talk) 08:34, 7 December 2024 (UTC)
- I think @Bugghost has summarized the respondent positions sufficiently below. I also think some portion of the opposers understand the proposal perfectly well and are just opposing anything that imposes participation standards. JoelleJay (talk) 22:54, 7 December 2024 (UTC)
- There will be many cases where it is not possible to say whether a piece of text does or does not contain "human thought" by observing the text, even if you know it was generated by an LLM. Statements like "originated through generative AI tools rather than human thought" will miss a large class of use cases, a class that will probably grow over the coming years. People work with LLMs to produce the output they require. It is often an iterative process by necessity because people and models make mistakes. An example of when "...rather than human thought" is not the case is when someone works with an LLM to solve something like a challenging technical problem where neither the person nor the model has a satisfactory solution to hand. The context window means that, just like with human collaborators, a user can iterate towards a solution through dialog and testing, exploring the right part of the solution space. Human thought is not absent in these cases, it is present in the output, the result of a collaborative process. In these cases, something "far beyond what is found in the user's input" is the objective, it seems like a legitimate objective, but regardless, it will happen, and we won't be able to see it happening. Sean.hoyland (talk) 10:46, 7 December 2024 (UTC)
- Yes, but this proposal is supposed to apply to just the obvious cases and will hopefully discourage good-faith users from using LLMs to create comments wholesale in general. It can be updated as technology progresses. There's also no reason editors using LLMs to organize/validate their arguments, or as search engines for whatever, have to copy-paste their raw output, which is much more of a problem since it carries a much higher chance of hallucination. That some people who are especially familiar with how to optimize LLM use, or who pay for advanced LLM access, will be able to deceive other editors is not a reason to not formally proscribe wholesale comment generation. JoelleJay (talk) 22:27, 7 December 2024 (UTC)
- That's reasonable. I can get behind the idea of handling obvious cases from a noise reduction perspective. But for me, the issue is noise swamping signal in discussions rather than how it was generated. I'm not sure we need a special rule for LLMs, maybe just a better way to implement the existing rules. Sean.hoyland (talk) 04:14, 8 December 2024 (UTC)
- Support "I Am Not A ChatBot; I Am A Free Misplaced Pages Editor!" Martinevans123 (talk) 18:30, 6 December 2024 (UTC)
- Comment: The original question was whether we should discount, ignore, strikethrough, or collapse chatbot-written content. I think there's a very big difference between these options, but most support !voters haven't mentioned which one(s) they support. That might make judging the consensus nearly impossible; as of now, supporters are the clear !majority, but supporters of what? — ypn^2 19:32, 6 December 2024 (UTC)
- That means that supporters support the proposal that "LLM-generated remarks in discussions should be discounted or ignored, and possibly removed in some manner". Not sure what the problem is here. Supporters support the things listed in the proposal - we don't need a prescribed 100% strict procedure, it just says that supporters would be happy with closers discounting, ignoring or under some circumstances deleting LLM content in discussions. BugGhost 🦗👻 19:40, 6 December 2024 (UTC)
- Doing something? At least the stage could be set for a follow-on discussion. Selfstudier (talk) 19:40, 6 December 2024 (UTC)
- More people have bolded "support" than other options, but very few of them have even attempted to refute the arguments against (and most that have attempted have done little more than handwaving or directly contradicting themselves), and multiple of those who have bolded "support" do not actually support what has been proposed when you read their comment. It's clear to me there is not going to be a consensus for anything other than "many editors dislike the idea of LLMs" from this discussion. Thryduulf (talk) 00:58, 7 December 2024 (UTC)
- Arguing one point doesn't necessarily require having to refute every point the other side makes. I can concede that "some people use LLMs to improve their spelling and grammar" without changing my overriding view that LLMs empower bad actors, time wasters and those with competence issues, with very little to offer wikipedia in exchange. Those that use LLMs legitimately to tidy up their allegedly competent, insightful and self-sourced thoughts should just be encouraged to post the prompts themselves instead of churning it through an LLM first. BugGhost 🦗👻 09:00, 7 December 2024 (UTC)
- If you want to completely ignore all the other arguments in opposition that's your choice, but don't expect closers to attach much weight to your opinions. Thryduulf (talk) 09:05, 7 December 2024 (UTC)
- Ok, here's a list of the main opposition reasonings, with individual responses.
- What about translations? - Translations are not up for debate here; the topic here is very clearly generative AI, and attempts to say that this topic covers translations as well are incorrect. No support voters have said the proposition should discount translated text, just oppose voters who are trying to muddy the waters.
- What about accessibility? - This could be a legitimate argument, but I haven't seen it substantiated anywhere other than handwaving "AI could help people!" arguments, which I would lump into the spelling and grammar argument I responded to above.
- Detection tools are inaccurate - This I very much agree with, and noted in my support and in many others as well. But there is no clause in the actual proposal wording that mandates the use of automated AI detection, and I assume the closer would note that.
- False positives - Any rule can have a potential for false positives, from wp:DUCK to close paraphrasing to NPA. We've just got to, as a community, become skilled at identifying genuine cases, just like we do for every other rule.
- LLM content should be taken at face value and see if it violates some other policy - hopelessly naive stance, and a massive timesink. Anyone who has had the misfortune of going on X/twitter in the last couple of years should know that AI is not just used as an aid for those who have trouble typing, it is mainly used to spam, disrupt discussion, and astroturf political opinions. Anyone who knows how bad the sockpuppetry issue is around CTOPs should be absolutely terrified of when (not if) someone decides to launch a full-throated wave of AI bots on Misplaced Pages discussions, because if we have to individually sanction each one like a human then admins will literally have no time for anything else.
- I genuinely cannot comprehend how some people could see how AI is decimating the internet through spam, bots and disinformation and still think for even one second that we should open the door to it. BugGhost 🦗👻 10:08, 7 December 2024 (UTC)
- There is no door. This is true for sockpuppetry too in my opinion. There can be a rule that claims there is a door, but it is more like a bead curtain. Sean.hoyland (talk) 11:00, 7 December 2024 (UTC)
- The Twitter stuff is not a good comparison here. Spam is already nukable on sight, mass disruptive bot edits are also nukable on sight, and it's unclear how static comments on Misplaced Pages would be the best venue to astroturf political opinions (most of which would be off-topic anyway, i.e., nukable on sight). I'd prefer if people didn't use ChatGPT to formulate their points, but if they're trying to formulate a real point then that isn't disruptive in the same way spam is. Gnomingstuff (talk) 02:22, 10 December 2024 (UTC)
- "it's unclear how static comments on Misplaced Pages would be the best venue to astroturf political opinions" - by disrupting RFCs and talk page discussions a bad actor could definitely use chatgpt to astroturf. A large proportion of the world uses Misplaced Pages (directly or indirectly) to get information - it would be an incredibly valuable thing to manipulate. My other point is that AI disruption bots (like the ones on twitter) would be indistinguishable from individuals using LLMs to "fix" spelling and grammar - by allowing one we make the other incredibly difficult to identify. How can you tell the difference between a bot and someone who just uses chatgpt for every comment? BugGhost 🦗👻 09:16, 10 December 2024 (UTC)
- You can't. That's the point. This is kind of the whole idea of WP:AGF. Gnomingstuff (talk) 20:22, 13 December 2024 (UTC)
- "Those that use LLMs legitimately to tidy up their allegedly competent, insightful and self-sourced thoughts should just be encouraged to post the prompts themselves instead of churning it through an LLM first." Social anxiety: Say "I" am a person unconfident in my writing. I imagine that when I post my raw language, I embarrass myself, and my credibility vanishes, while in the worst case nobody understands what I mean. As bad confidence is often built up through negative feedback, it's usually meritful or was meritful at some point for someone to seek outside help. Aaron Liu (talk) 23:46, 8 December 2024 (UTC)
- While I sympathise with that hypothetical, Misplaced Pages isn't therapy and we shouldn't make decisions that do long-term harm to the project just because a hypothetical user feels emotionally dependent on a high tech spellchecker. I also think that in general wikipedia (myself included) is pretty relaxed about spelling and grammar in talk/WP space. BugGhost 🦗👻 18:45, 10 December 2024 (UTC)
- We also shouldn't do long term harm to the project just because a few users are wedded to idea that LLMs are and will always be some sort of existential threat. The false positives that are an unavoidable feature of this proposal will do far more, and far longer, harm to the project than LLM-comments that are all either useful, harmless or collapseable/removable/ignorable at present. Thryduulf (talk) 19:06, 10 December 2024 (UTC)
- "The false positives that are an unavoidable feature of this proposal will do far more, and far longer, harm to the project" - the same could be said for WP:DUCK. The reason why it's not a big problem for DUCK is because the confidence level is very high. Like I've said in multiple other comments, I don't think "AI detectors" should be trusted, and that the bar for deciding whether something was created via LLM should be very high. I 100% understand your opinion and the reasoning behind it, I just think we have differing views on how well the community at large can identify AI comments. BugGhost 🦗👻 09:07, 11 December 2024 (UTC)
- I don't see how allowing shy yet avid users to contribute has done or will do long-term harm. The potential always outweighs rational evaluation of outcomes for those with anxiety, a condition that is not behaviorally disruptive. Aaron Liu (talk) 02:47, 11 December 2024 (UTC)
- I definitely don't want to disallow shy yet avid users! I just don't think having a "using chatgpt to generate comments is allowed" rule is the right solution to that problem, considering the wider consequences. BugGhost 🦗👻 08:52, 11 December 2024 (UTC)
- Did you mean "... disallowed"? If so, I think we weigh accessibility differently against the quite low amount of AI trolling. Aaron Liu (talk) 14:10, 11 December 2024 (UTC)
- Support strikethroughing or collapsing per everyone else. The opposes that mention ESL have my sympathy, but I am not sure how many of them are ESL themselves. Having learnt English as my second language, I have always found it easier to communicate when users are expressing things in their own way, not polished by some AI. I sympathise with the concerns and believe the right solution is to lower our community standards with respect to WP:CIR and similar (in terms of ESL communication) without risking hallucinations by AI. Soni (talk) 02:52, 7 December 2024 (UTC)
- Oppose the use of AI detection tools. False positive rates for AI-detection are dramatically higher for non-native English speakers. AI detection tools had a 5.1% false positive rate for human-written text from native English speakers, but human-written text from non-native English speakers had a 61.3% false positive rate. ~ F4U (talk • they/it) 17:53, 8 December 2024 (UTC)
Section break 5
- Oppose - I'm sympathetic to concerns of abuse through automated mass-commenting, but this policy looks too black-and-white. Contributors may use LLMs for many reasons, including to fix the grammar, to convey their thoughts more clearly, or to adjust the tone for a more constructive discussion. As it stands, this policy may lead to dismissing good-faith AI-assisted comments, as well as false positives, without considering the context. Moreover, while mainstream chatbots are not designed to just mimic the human writing style, there are existing tools that can make AI-generated text more human-like, so this policy does not offer that much protection against maliciously automated contributions. Alenoach (talk) 01:12, 9 December 2024 (UTC)
- Oppose – Others have cast doubt on the efficacy of tools capable of diagnosing LLM output, and I can't vouch for its being otherwise. If EEng's example of ChatBot output is representative—a lengthy assertion of notability without citing sources—that is something that could well be disregarded whether it came from a bot or not. If used carefully, AI can be useful as an aide-memoire (such as with a spell- or grammar-checker) or as a supplier of more felicitous expression than the editor is naturally capable of (e.g. Google Translate). Dhtwiki (talk) 10:27, 9 December 2024 (UTC)
- Comment / Oppose as written. It's not accurate that GPTZero is good at detecting AI-generated content. Citations (slightly out of date but there's little reason to think things have changed from 2023): https://www.aiweirdness.com/writing-like-a-robot/ , https://www.aiweirdness.com/dont-use-ai-detectors-for-anything-important/ . For those too busy to read, a few choice quotes: "the fact that it insisted even one excerpt is not by a human means that it's useless for detecting AI-generated text," and "Not only do AI detectors falsely flag human-written text as AI-written, the way in which they do it is biased" (citing https://arxiv.org/abs/2304.02819 ). Disruptive, worthless content can already be hatted, and I'm not opposed to doing so. Editors should be sharply told to use their own words, and if not already written, an essay saying we'd rather have authentic if grammatically imperfect comments than AI-modulated ones would be helpful to cite at editors who offer up AI slop. But someone merely citing GPTZero is not convincing. GPTZero will almost surely misidentify genuine commentary as AI-generated. So fine with any sort of reminder that worthless content can be hatted, and fine with a reminder not to use ChatGPT for creating Misplaced Pages talk page posts, but not fine with any recommendations of LLM-detectors. SnowFire (talk) 20:00, 9 December 2024 (UTC)
- @SnowFire, I can't tell if you also oppose the actual proposal, which is to permit hatting/striking obvious LLM-generated comments (using GPTzero is a very minor detail in JSS's background paragraph, not part of the proposal). JoelleJay (talk) 01:47, 11 December 2024 (UTC)
- I support the proposal in so far as disruptive comments can already be hatted and that LLM-generated content is disruptive. I am strongly opposed to giving well-meaning but misguided editors a license to throw everyone's text into an AI-detector and hat the comments that score poorly. I don't think it was that minor a detail, and to the extent that detail is brought up, it should be as a reminder to use human judgment and forbid using alleged "AI detectors" instead. SnowFire (talk) 03:49, 11 December 2024 (UTC)
- Support collapsing AI (specifically, Large language model) comments by behavioral analysis (most actually disruptive cases I've seen are pretty obvious) and not the use of inaccurate tools like ZeroGPT. I think hatting with the title "Editors suspect that this comment has been written by a Large language model" is appropriate. They take up SO much space in a discussion because they are also unnecessarily verbose, and talk on and on but never ever say something that even approaches having substance. Discussions are for human Misplaced Pages editors, we shouldn't have to sift through comments someone put 0 effort into and outsourced to a robot that writes using random numbers (that's a major part of how tools like ChatGPT work and maintain variety). If someone needs to use an AI chatbot to communicate because they don't understand English, then they are welcome to contribute to their native language Misplaced Pages, but I don't think they have the right to insist that we at enwiki spend our effort reading comments they put minimal effort into besides opening the ChatGPT website. If really needed, they can write in their native language and use a non-LLM tool like Google Translate. The use of non-LLM tools like Grammarly, Google Translate, etc. I think should still be OK for all editors, as they only work off comments that editors have written themselves. MolecularPilot 05:10, 10 December 2024 (UTC)
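For readers wondering what "writes using random numbers" refers to: chatbots typically choose each next token by temperature-scaled sampling from the model's scores. The sketch below is a generic illustration of that step (the token names and scores are invented), not any specific product's implementation.

```python
import math
import random

def sample_next(logits: dict[str, float], temperature: float, rng: random.Random) -> str:
    # Softmax over logits scaled by temperature, then a random draw:
    # this is the "random numbers" step that gives chatbots their variety.
    scaled = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(scaled.values())
    r = rng.random() * total
    for tok, weight in scaled.items():
        r -= weight
        if r <= 0:
            return tok
    return tok  # fallback for floating-point edge cases
```

At low temperature the highest-scoring token nearly always wins (near-deterministic output); at high temperature the draw spreads across tokens, which is where the run-to-run variety comes from.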
- Adding that enforcing people writing things in their own words will actually help EAL (English additional language) editors contribute here. I work with EAL people irl, and even people who have almost native proficiency with human-written content find AI output confusing because it says things in the most confusing, verbose ways using difficult sentence constructions and words. I've seen opposers in this discussion who maybe haven't had experience working with EAL people go "what about EAL people?", but really, I think this change will help them (open to being corrected by someone who is EAL, tho). MolecularPilot 05:17, 10 December 2024 (UTC)
- Also, with regards to oppose comments that discussions are not a vote so closers will ignore AI statements which don't have merit - unedited LLM statements are incredibly verbose and annoying, and clog up the discussion. Imagine multiple paragraphs, each with a heading, but all of which say almost nothing, they're borderline WP:BLUDGEONy. Giving the power to HAT them will help genuine discussion contributors keep with the flow of human arguments and avoid scaring away potential discussion contributors who are intimidated or don't feel they have the time to read the piles of AI nonsense that fill the discussion. MolecularPilot 06:38, 10 December 2024 (UTC)
- Support (removing) in general. How is this even a question? There is no case-by-case. It is a fundamental misunderstanding of how LLMs work to consider their output reliable without careful review. At which point, the editor could have written it themselves without inherent LLM bias. The point of any discussion is to provide analytical response based on the context, not have some tool regurgitate something from a training set that sounds good. And frankly, it is disrespectful to make someone read "AI" responses. It is a tool and there is a place and time for it, but not in discussions in an encyclopedia. — HELLKNOWZ ∣ TALK 15:41, 10 December 2024 (UTC)
- Strong Support. I'm very interested in what you (the generic you) have to say about something. I'm not remotely interested in what a computer has to say about something. It provides no value to the discussion and is a waste of time. Useight (talk) 18:06, 10 December 2024 (UTC)
- Comments that provide no value to the discussion can already be hatted and ignored regardless of why they provide no value, without any of the false positive or false negatives inherent in this proposal. Thryduulf (talk) 18:25, 10 December 2024 (UTC)
- Indeed, and that's fine for one-offs when a discussion goes off the rails or what-have-you. But we also have WP:NOTHERE for disruptive behavior, not working collaboratively, etc. I'm suggesting that using an AI to write indicates that you're not here to build the encyclopedia, you're here to have an AI build the encyclopedia. I reiterate my strong support for AI-written content to be removed, struck, collapsed, or hatted and would support further measures even beyond those. Useight (talk) 21:54, 11 December 2024 (UTC)
- There are two sets of people described in your comment: those who use AI and those who are NOTHERE. The two sets overlap, but nowhere near sufficiently to declare that everybody in the former set are also in the latter set. If someone is NOTHERE they already can and should be blocked, regardless of how they evidence that. Being suspected of using AI (note that the proposal does not require proof) is not sufficient justification on its own to declare someone NOTHERE, per the many examples of constructive use of AI already noted in this thread. Thryduulf (talk) 22:03, 11 December 2024 (UTC)
- To reiterate, I don't believe that any use of AI here is constructive, thus using it is evidence of WP:NOTHERE, and, therefore, the set of people using AI to write is completely circumscribed within the set of people who are NOTHERE. Please note that I am referring to users who use AI-generated writing, not users suspected of using AI-generated writing. I won't be delving into how one determines whether someone is using AI or how accurate it is, as that is, to me, a separate discussion. This is the end of my opinion on the matter. Useight (talk) 23:26, 11 December 2024 (UTC)
- You are entitled to your opinion of course, but as it is contradicted by the evidence of both multiple constructive uses and of the near-impossibility of reliably detecting LLM-generated text without false positives, I would expect the closer of this discussion to attach almost no weight to it. Thryduulf (talk) 00:42, 12 December 2024 (UTC)
- I am ESL and use LLMs sometimes because of that. I feel like I don't fit into the NOTHERE category. It seems like you do not understand what they are or how they can be used constructively. PackMecEng (talk) 01:43, 12 December 2024 (UTC)
- No, I understand. What you're talking about is no different from using Google Translate or asking a native-speaker to translate it. You, a human, came up with something you wanted to convey. You wrote that content in Language A. But you wanted to convey that message that you - a human - wrote, but now in Language B. So you had your human-written content translated to Language B. I have no qualms with this. It's your human-written content, expressed in Language B. My concern is with step 1 (coming up with something you want to convey), not step 2 (translating that content to another language). You write a paragraph for an article but it's in another language and you need the paragraph that you wrote translated? Fine by me. You ask an AI to write a paragraph for an article? Not fine by me. Again, I'm saying that there is no valid use case for AI-written content. Useight (talk) 15:59, 12 December 2024 (UTC)
- It seems very likely that there will be valid use cases for AI-written content if the objective is maximizing quality and minimizing errors. Research like this demonstrates that there will likely be cases where machines outperform humans in specific Misplaced Pages domains, and soon. But I think that is an entirely different question than potential misuse of LLMs in consensus related discussions. Sean.hoyland (talk) 16:25, 12 December 2024 (UTC)
- But your vote and the proposal above make no distinction there. Which is the main issue. Also, not to be pedantic, but every prompt to an LLM is written by a human looking to convey a message. Every time someone hits publish on something here it is that person confirming that is what they are saying. So how do we in practice implement what you suggest? Because without a method better than vibes it's worthless. PackMecEng (talk) 18:53, 12 December 2024 (UTC)
- The proposal specifies content generated by LLMs, which has a specific meaning in the context of generative AI. If a prompt itself conveys a meaningful, supported opinion, why not just post that instead? The problem comes when the LLM adds more information than was provided, which is the whole point of generative models. JoelleJay (talk) 01:52, 13 December 2024 (UTC)
- To reiterate, I don't believe that any use of AI here is constructive, thus using it is evidence of WP:NOTHERE, and, therefore, the set of people using AI to write is completely circumscribed within the set of people who are NOTHERE. Please note that I am referring to users who use AI-generated writing, not users suspected of using AI-generated writing. I won't be delving into how one determines whether someone is using AI or how accurate it is, as that is, to me, a separate discussion. This is the end of my opinion on the matter. Useight (talk) 23:26, 11 December 2024 (UTC)
- There are two sets of people described in your comment: those who use AI and those who are NOTHERE. The two sets overlap, but nowhere near sufficiently to declare that everybody in the former set is also in the latter set. If someone is NOTHERE they already can and should be blocked, regardless of how they evidence that. Being suspected of using AI (note that the proposal does not require proof) is not sufficient justification on its own to declare someone NOTHERE, per the many examples of constructive use of AI already noted in this thread. Thryduulf (talk) 22:03, 11 December 2024 (UTC)
- Indeed, and that's fine for one-offs when a discussion goes off the rails or what-have-you. But we also have WP:NOTHERE for disruptive behavior, not working collaboratively, etc. I'm suggesting that using an AI to write indicates that you're not here to build the encyclopedia, you're here to have an AI build the encyclopedia. I reiterate my strong support for AI-written content to be removed, struck, collapsed, or hatted and would support further measures even beyond those. Useight (talk) 21:54, 11 December 2024 (UTC)
- Comments that provide no value to the discussion can already be hatted and ignored regardless of why they provide no value, without any of the false positive or false negatives inherent in this proposal. Thryduulf (talk) 18:25, 10 December 2024 (UTC)
- Yes in principle. But in practice, LLM detectors are not foolproof, and there are valid reasons to sometimes use an LLM, for example to copyedit. I have used Grammarly before and have even used the Microsoft Editor, and while they aren't powered by LLMs, LLMs are a tool that needs to be used appropriately on Misplaced Pages. Awesome Aasim 19:55, 10 December 2024 (UTC)
- Support. Using LLM to reply to editors is lazy and disrespectful of fellow editors' time and brainpower. In the context of AFD, it is particularly egregious since an LLM can't really read the article, read sources, or follow our notability guidelines.
- By the way, "gptzero and other such tools are very good at detecting this". I don't think this is correct at all. I believe the false positive rate for AI detectors is quite high. High enough that I would recommend not using AI detectors. –Novem Linguae (talk) 03:23, 11 December 2024 (UTC)
- Question @Just Step Sideways: Since there appears to be a clear consensus against the AI-detectors part, would you like to strike that from the background? Aaron Liu (talk) 14:10, 11 December 2024 (UTC)
- Support. AI generated text should be removed outright. If you aren't willing to put the work into doing your own writing then you definitely haven't actually thought deeply about the matter at hand. User1042💬✒️ 14:16, 11 December 2024 (UTC)
- This comment is rather ironic given that it's very clear you haven't thought deeply about the matter at hand, because if you had then you'd realise that it's actually a whole lot more complicated than that. Thryduulf (talk) 14:26, 11 December 2024 (UTC)
- Thryduulf I don't think this reply is particularly helpful, and it comes off as slightly combative. It's also by my count your 24th comment on this RFC. BugGhost 🦗👻 19:20, 11 December 2024 (UTC)
- I recognize that AI-paraphrased or AI-edited text is not problematic in the same ways as text generated outright by an AI. I only meant to address the core issue at stake: content whose first draft was written by an AI system. User1042💬✒️ 22:16, 17 December 2024 (UTC)
- Oppose @Just Step Sideways: The nomination's 2nd para run through https://www.zerogpt.com/ gives "11.39% AI GPT*". The nomination's linked https://gptzero.me/ site previously advertised https://undetectable.ai/ , wherewith how will we deal? Imagine the nomination was at AFD. What should be the response to LLM accusations against the highlighted sentence? "I've recently come across several users in AFD discussions that are using LLMs to generate their remarks there. As many of you are aware, gptzero and other such tools are very good at detecting this. I don't feel like any of us signed up for participating in discussions where some of the users are not using their own words but rather letting technology do it for them. Discussions are supposed to be between human editors. If you can't make a coherent argument on your own, you are not competent to be participating in the discussion. I would therefore propose that LLM-generated remarks in discussions should be discounted or ignored, and possibly removed in some manner" 172.97.141.219 (talk) 17:41, 11 December 2024 (UTC)
- Support with the caveat that our ability to deal with the issue goes only as far as we can accurately identify the issue (this appears to have been an issue raised across a number of the previous comments, both support and oppose, but I think it bears restating because we're approaching this from a number of different angles and it's IMO the most important point regardless of what conclusions you draw from it). Horse Eye's Back (talk) 19:24, 11 December 2024 (UTC)
- Strong support, limited implementation. "Misplaced Pages is written by volunteer editors", says our front page. This is who we are, and our writing is what Misplaced Pages is. It's true that LLM-created text can be difficult to identify, so this may be a bit of a moving target, and we should be conservative in what we remove—but I'm sure at this point we've all run across cases (whether here or elsewhere in our digital lives) where someone copy/pastes some text that includes "Is there anything else I can help you with?" at the end, or other blatant tells. This content should be deleted without hesitation. Retswerb (talk) 04:11, 12 December 2024 (UTC)
- Support in concept, questions over implementation — I concur with Dronebogus that users who rely on LLMs should not edit English Misplaced Pages. It is not a significant barrier for users to use other means of communication, including online translators, rather than artificial intelligence. How can an artificial intelligence tool argue properly? However, I question how this will work in practice without an unacceptable degree of error. elijahpepe@wikipedia (he/him) 22:39, 12 December 2024 (UTC)
- Many, possibly most, online translators use artificial intelligence based on LLMs these days. Thryduulf (talk) 22:46, 12 December 2024 (UTC)
- There is a difference between translating words you wrote in one language into English and using an LLM to write a comment for you. elijahpepe@wikipedia (he/him) 22:59, 12 December 2024 (UTC)
- Neither your comment nor the original proposal make any such distinction. Thryduulf (talk) 23:34, 12 December 2024 (UTC)
- Well since people keep bringing this up as a semi-strawman: no I don’t support banning machine translation, not that I encourage using it (once again, if you aren’t competent in English please don’t edit here) Dronebogus (talk) 07:34, 13 December 2024 (UTC)
- LLMs are incredible at translating, and many online translators already incorporate them, including Google Translate. Accommodating LLMs is an easy way to support not only the ESL but also the avid but shy. It has way more benefits than the unseen-to-me amount of AI trolling that isn't already collapse-on-sight. Aaron Liu (talk) 00:05, 13 December 2024 (UTC)
- Google Translate uses the same transformer architecture that LLMs are built around, and uses e.g. PaLM to develop more language support (through training that enables zero-shot capabilities) and for larger-scale specialized translation tasks performed through the Google Cloud "adaptive translation" API, but it does not incorporate LLMs into translating your everyday text input, which still relies on NMTs. And even for the API features, the core constraint of matching input rather than generating content is still retained (obviously it would be very bad for a translation tool to insert material not found in the original text!). LLMs might be good for translation because they are better at evaluating semantic meaning and detecting context and nuance, but again, the generative part that is key to this proposal is not present. JoelleJay (talk) 01:20, 13 December 2024 (UTC)
- "PaLM (Pathways Language Model) is a 540 billion-parameter transformer-based large language model (LLM) developed by Google AI." If you meant something about how reschlmunking the outputs of an LLM or using quite similar architecture is not really incorporating the LLM, I believe we would be approaching Ship of Theseus levels of recombination, to which my answer is it is the same ship. As for "obviously it would be very bad for a translation tool to insert material not found in the original text!": That happens! Aaron Liu (talk) 01:29, 13 December 2024 (UTC)
- PaLM2 is not used in the consumer app (Google Translate), it's used for research. Google Translate just uses non-generative NMTs to map input to its closest cognate in the target language. JoelleJay (talk) 01:34, 13 December 2024 (UTC)
- Well, is the NMT really different enough not to be classified as an LLM? IIRC the definition of an LLM is something that outputs by predicting one-by-one what the next word/"token" should be, and an LLM I asked agreed that NMTs satisfy the definition of a generative LLM, though I think you're the expert here. Aaron Liu (talk) 02:01, 13 December 2024 (UTC)
- Google Translate's NMT hits different enough to speak English much less naturally than ChatGPT 4o. I don't consider it an LLM, because the param count is 380M not 1.8T. As for "the definition of an LLM is something that outputs by predicting one-by-one what the next word/'token' should be": no, that def would fit ancient RNN tech too. 172.97.141.219 (talk) 17:50, 13 December 2024 (UTC)
- Even if you don’t consider it L, I do, and many sources cited by the article do. Since we’ll have such contesting during enforcement, it’s better to find a way that precludes such controversy. Aaron Liu (talk) 20:44, 13 December 2024 (UTC)
- NMTs, LLMs, and the text-creation functionality of LLMs are fundamentally different in the context of this discussion, which is about content generated through generative AI. NMTs are specifically for translation: they are trained on parallel corpora and their output is optimized to match the input as precisely as possible, not to create novel text. LLMs have different training, including way more massive corpora, and were designed specifically to create novel text. One of the applications of LLMs may be translation (though currently it's too computationally intensive to run them for standard consumer purposes), by virtue of their being very good at determining semantic meaning, but even if/when they do become mainstream translation tools what they'll be used for is still not generative when it comes to translation output. JoelleJay (talk) 22:29, 13 December 2024 (UTC)
- How will you differentiate between the use of LLM for copyediting and the use of LLM for generation? Aaron Liu (talk) 23:30, 13 December 2024 (UTC)
- The proposal is for hatting obvious cases of LLM-generated comments. Someone who just uses an LLM to copyedit will still have written the content themselves and presumably their output would not have the obvious tells of generative AI. JoelleJay (talk) 23:56, 13 December 2024 (UTC)
- Not when I tried to use it. Quantitatively, GPTZero went from 15% human to 100% AI for me despite the copyedits only changing 14 words. Aaron Liu (talk) 00:33, 14 December 2024 (UTC)
- I think there is consensus that GPTZero is not usable, even for obvious cases. JoelleJay (talk) 00:55, 14 December 2024 (UTC)
- Yes, but being as far off as 100% means people will also probably think the rewrite is ChatGPT-generated. Aaron Liu (talk) 01:18, 14 December 2024 (UTC)
- Does it really mean that? All you've demonstrated is that GPTZero has false positives, which is exactly why its use here was discouraged. jlwoodwa (talk) 05:26, 14 December 2024 (UTC)
- My subjective evaluation of what I got copyediting from ChatGPT was that it sounded like ChatGPT. I used GPTZero to get a number. Aaron Liu (talk) 14:18, 14 December 2024 (UTC)
- My guess is that the copyediting went beyond what most people would actually call "copyediting". JoelleJay (talk) 18:04, 23 December 2024 (UTC)
- It changed only 14 words across two paragraphs and still retained the same meaning in a way that I would describe it as copyediting. Such levels of change are what those lacking confidence in tone would probably seek anyways. Aaron Liu (talk) 00:15, 24 December 2024 (UTC)
- On one hand, AI slop is a plague on humanity and obvious LLM output should definitely be disregarded when evaluating consensus. On the other hand, I feel like existing policy covers this just fine, and any experienced closer will lend greater weight to actual policy-based arguments, and discount anything that is just parroting jargon. WindTempos they 23:21, 12 December 2024 (UTC)
- Support in principle, but we cannot rely on any specific tools because none are accurate enough for our needs. Whenever I see a blatant ChatGPT-generated !vote, I ignore it. They're invariably poorly reasoned and based on surface-level concepts rather than anything specific to the issue being discussed. If someone is using AI to create their arguments for them, it means they have no actual argument besides WP:ILIKEIT and are looking for arguments that support their desired result rather than coming up with a result based on the merits. Also, toasters do not get to have an opinion. The Wordsmith 05:17, 13 December 2024 (UTC)
- Oppose. For creating unnecessary drama. First off, the "detector" of the AI bot is not reliable, or at least the reliability of the tool itself is still questionable. If the tool to detect LLM itself is unreliable, how can one reliably point out which one is LLM and which one is not? We got multiple tools that claimed to be able to detect LLM as well. Which one should we trust? Should we be elevating one tool over the others? Has there been any research that showed that the "picked" tool is the most reliable? Second, not all LLMs are dangerous. We shouldn't treat LLM as a virus that will somehow take over the Internet or something. Some editors use LLM to smooth out their grammar and sentences and fix up errors, and there is nothing wrong with that. I understand that banning obvious LLM text per WP:DUCK is good, but totally banning them is plain wrong. ✠ SunDawn ✠ (contact) 22:56, 15 December 2024 (UTC)
- @SunDawn, the proposal is to permit editors to collapse/strike obvious LLM text, not to "ban LLM totally". If LLM use is imperceptible, like for tweaking grammar, it's not going to be affected. JoelleJay (talk) 20:17, 19 December 2024 (UTC)
- Support with some kind of caveat about not relying on faulty tools or presuming that something is LLM without evidence or admission, based on the following reasons:
- We have stricter rules around semi-automated editing (rollback, AutoWikiBrowser, etc.) and even stricter rules around fully automated bot editing. These cleanup edits are widely accepted as positive, but there is still the concern about an overwhelming amount of bad edits to wade through and/or fix. A form of that concern is relevant here. Someone could reply to every post in this discussion in just a minute or so without ever reading anything. That's inherently disruptive.
- Nobody who is voting "oppose" is using an LLM to cast that vote. The LLM comments have been left by those supporting to make a point about how problematic they are for discussions like this. I think this reflects, even among oppose voters, a developing community consensus that LLM comments will be disregarded.
- If the rule in practice is to disregard LLM comments, not writing that rule down does not stop it from being the rule, consensus, or a community norm. It just makes the rule less obvious and less clear.
- It's disrespectful for an editor to ask someone to spend their time reading a comment if they couldn't be bothered to spend any time writing it, and therefore a violation of the policy Misplaced Pages:Civility: "treat your fellow editors as respected colleagues with whom you are working on an important project."
- Also, I don't read the proposal as a ban on machine translation in any way. Rjj (talk) 00:01, 18 December 2024 (UTC)
- @Rjjiii, above @Dilettante said their !vote was created by LLM. JoelleJay (talk) 20:14, 19 December 2024 (UTC)
- I am strongly opposed to banning or ignoring LLM-made talk page comments just because they are LLM-made. I'm not a big fan of LLMs at all; they are actually useful only for some certain things, very few of which are directly relevant to contributing to Misplaced Pages in English or in any other language. However, some of those things are useful for this, at least for some humans, and I don't want to see these humans being kicked out of the English Misplaced Pages. I already witnessed several cases in which people whose first language is not English tried writing talk page responses in the English Misplaced Pages, used an LLM to improve their writing style, and got their responses ignored only because they used an LLM. In all those cases, I had strong reasons to be certain that they were real humans, that they meant what they wrote, and that they did it all in good faith. Please don't say that anyone who wants to contribute to the English Misplaced Pages should, in the first place, know English well enough to write a coherent talk page comment without LLM assistance; occasionally, I kind of wish that it was like that myself, but then I recall that the world is more complicated and interesting than that. Uses of LLMs that help the English Misplaced Pages be more inclusive for good-faith people are good. Of course, defining what good faith means is complicated, but using an LLM is not, by itself, a sign of bad faith. --Amir E. Aharoni (talk) 04:52, 19 December 2024 (UTC)
- Those concerned about their English should use translation software rather than an llm. Both might alter the meaning to some extent, but only one will make things up. (It's also not a sure assumption that llm text is coherent talkpage text.) CMD (talk) 07:44, 19 December 2024 (UTC)
- @CMD The dividing line between translation software and LLM is already blurry and will soon disappear. It's also rare that translation software results in coherent talkpage text, unless it's relying on some (primitive) form of LLM. So if we're going to outlaw LLMs, we would need to outlaw any form of translation software, and possibly any text-to-speech software as well. ypn^2 23:41, 19 December 2024 (UTC)
- The distinctions have already been covered above, and no we would not have to. There is an obvious difference between software intended to translate and software intended to generate novel text, and users are likely to continue to treat those differently. CMD (talk) 02:49, 20 December 2024 (UTC)
- Strong support. LLM-generated content has no place anywhere on the encyclopedia. Stifle (talk) 10:27, 19 December 2024 (UTC)
- Strong oppose to the proposal as written. Misplaced Pages already suffers from being stuck in a 2001 mindset and a refusal to move with the technological times. Anyone who remembers most Wikipedians' visceral reaction to FLOW and VisualEditor when they were first introduced will observe a striking similarity. Yes, those projects had serious problems, as do LLM-generated comments. But AI is the future, and this attitude of "Move slowly to avoid changing things" will ultimately lead Misplaced Pages the way of Encyclopædia Britannica. Our discussion needs to be how best to change, not how to avoid to change. ypn^2 23:54, 19 December 2024 (UTC)
- The main objection to VE and a major objection to FLOW was the developers' insistence on transforming Wikitext to HTML for editing and then transforming that back to Wikitext. Aaron Liu (talk) 01:31, 20 December 2024 (UTC)
- True. Then, as now, there were many valid objections. But IIRC, there was limited discussion of "Let's figure out a better way to improve", and lots of "Everything is fine; don't change anything, ever." That attitude concerns me. ypn^2 01:52, 20 December 2024 (UTC)
- Support. I'm not even slightly swayed by these "it'll be too hard to figure out" and "mistakes could be made" and "we can't be 100% certain" sorts of arguments. That's true of everything around here, and it's why we have an admins-must-earn-a-boatload-of-community-trust system, and a system of review/appeal of decisions they (or of course non-admin closers) make, and a consensus-based decisionmaking system more broadly. JoelleJay has it exactly right:
having a policy that permits closers to discount apparently-LLM-generated contributions will discourage good-faith editors from using LLMs irresponsibly and perhaps motivate bad-faith editors to edit the raw output to appear more human, which would at least involve some degree of effort and engagement with their "own" arguments.
And as pointed out by some others, the "it'll hurt non-native-English speakers" nonsense is, well, nonsense; translation is a different and unrelated process (though LLMs can perform it to some extent), of remapping one's own material onto another language. I'm also not in any way convinced by the "people poor at writing and other cognitive tasks need the LLM to help them here" angle, because WP:COMPETENCE is required. This is work (albeit volunteer work), it is WP:NOT a game, a social-media playground, a get-my-ideas-out-there soapbox, or a place to learn how to interact e-socially or pick up remedial writing skills, nor a venue for practicing one's argument techniques. It's an encyclopedia, being built by people who – to be productive contributors instead of a draining burden on the entire community – must have: solid reasoning habits, great judgement (especially in assessing reliability of claims and the sources making them), excellent writing skills of a highly particularized sort, a high level of fluency in this specific language (in multiple registers), and a human-judgment ability to understand our thick web of policies, guidelines, procedures, and often unwritten norms, and how they all interact, in a specific contextual way that may vary greatly by context. None of these is optional. An LLM cannot do any of them adequately (not even write well; their material sticks out like a sore thumb, and after a while you can even tell which LLM produced the material by its habitual but distinctive crappy approach to simulating human thought and language).
In short, if you need an LLM to give what you think is meaningful input into a decision-making process on Misplaced Pages (much less to generate mainspace content for the public), then you need to go find something else to do, something that fits your skills and abilities. Saying this so plainly will probably upset someone, but so it goes. I have a rep for "not suffering fools lightly" and "being annoying but correct"; I can live with that if it gets the right decisions made and the work advanced. — SMcCandlish ☏ ¢ 😼 05:49, 22 December 2024 (UTC)
- The problem with all that is that we already have a policy that allows the hatting or removal of comments that are actually problematic because of their content (which are the only ones that we should be removing) without regard for whether it was or was not written by LLM. Everything that actually should be removed can be removed already. Thryduulf (talk) 11:39, 22 December 2024 (UTC)
- People who have good reading skills, great judgement, and solid reasoning habits enough to find problems in existing articles don't necessarily have great interpersonal writing/communication skills or the confidence. Meanwhile, for all LLM is bad at, it is very good at diluting everything you say to become dry, dispassionate, and thus inoffensive. Aaron Liu (talk) 15:26, 22 December 2024 (UTC)
- ok, I agree with @SMcCandlish, so therefore my vote is Support. Sm8900 (talk) 12:41, 25 December 2024 (UTC)
- Support. Sure I have questions about detection, but I don't think it means we shouldn't have a policy that explicitly states that it should not be used (and can be ignored/hatted if it is). Judging solely based on content (and no wp:bludgeoning, etc.) is unsustainable IMO. It would mean taking every wall of text seriously until it's clear that the content is unhelpful, and LLMs are very good at churning out plausible-sounding bullshit. It wastes everyone's time. If cognitive impairments or ESL issues make it hard to contribute, try voice-to-text, old-school translation software, or some other aid. LLMs aren't really you.--MattMauler (talk) 11:27, 23 December 2024 (UTC)
- Comment. While I agree with the sentiment of the request, I am at a loss to see how we can identify LLM generated comments in a consistent manner that can scale. Yes, it might be easier to identify egregious copy paste of wall of text, but, anything other than that might be hard to detect. Our options are:
- Robust tooling to detect LLM generated text, with acceptably low levels of false positives. Somewhat similar to what Earwig does for Copyvios. But, someone needs to build it and host it on WMTools or at a similar location.
- Self-certification by editors. Every edit/publish dialog box should have a y/n checkbox for "Is this text LLM generated?"
- Editors playing a vigilante role in reading the text and making a personal call on other editors' text. Obviously this is least preferred.
- These are my starting views. Ktin (talk) 00:37, 24 December 2024 (UTC)
- A couple of days ago, an editor ran a new article through GPTzero, and was told that it was 99.8% likely to be AI-generated. I ran the same revision of the same article through the same tool the next day, and it told me that it was 98.3% likely to be human-written.
- Now we're left scratching our heads: Why the difference? Which answer is correct? WhatamIdoing (talk) 04:36, 26 December 2024 (UTC)
- Strong oppose as it's impossible to enforce. Also LLMs are a valid and useful accessibility tool. – Anne drew 05:18, 24 December 2024 (UTC)
- Bonus suggestion!: I'm curious what Wikipedians think about this so let's try this. Many of the comments here discuss the impracticality of determining whether a user's comments are AI generated (i.e. gptzero isn't perfect), and many give valid arguments for using LLMs (i.e. ESL). If an argument is suspected to be written by LLM, I propose that editors should examine the user. Take a look at their listed contributions, and if they seem to have a habit of using AI, open a discussion on their talk page. If the user has a habit of using AI and doesn't recognize the inherent problems and refuses to change, this can be brought to the administrators' noticeboard for potential blocks. If (and only if) the person is blocked for using AI, their comments can be ignored. Or just ask ChatGPT to summarize them for you lol guninvalid (talk) 06:12, 25 December 2024 (UTC)
- I think that the general idea here is good: As much as possible, treat each account individually, and handle LLM use as a behavioral problem when it's actually a practical problem (i.e., not when it seems to be accurate and seems to be appropriate). WhatamIdoing (talk) 04:38, 26 December 2024 (UTC)
- Strong support the removal of any obvious, low effort AI-generated post. I recently came across a user posting multiple such examples. When called out on it they blew up and posted a comment saying, amongst other things "HAHAHAHHAHAHAHAHAHAHHAHAHAHAHAHA. Yes, some of it might be. Because I don't have time to argue with, in my humble opinion, stupid PHOQUING people." and "YOUUUUUUUUUUUUUUUUUUUUUUUUUUUU are assuming bath faith in me."
- They were later blocked as a sock evading a global lock.
- Currently it is too easy for trolls to game WP:AGF and use AI to waste people's time arguing with their bot-generated replies. Using AI to write your posts for you makes it difficult for others to assume good faith. I am ok with obvious exceptions like a non-native speaker using AI to help them articulate their point. Photos of Japan (talk) 21:29, 25 December 2024 (UTC)
- Support for any and all reasons above. I'd be in favor of entirely banning AI-written text on the platform in articlespace as well.
Alternate proposal
- The following discussion is closed. Please do not modify it. Subsequent comments should be made in a new section. A summary of the conclusions reached follows.
- Redundant proposal, confusingly worded, with no support, and not even any further discussion interest in 10 days. — SMcCandlish ☏ ¢ 😼 05:23, 22 December 2024 (UTC)
Whereas many editors, including me, have cited problems with accuracy in regards to existing tools such as ZeroGPT, I propose that remarks that are blatantly generated by an LLM or similar automated system should be discounted/removed/collapsed/hidden. ThatIPEditor 10:00, 10 December 2024 (UTC)
- Oppose as completely unnecessary and far too prone to error per the above discussion. Any comment that is good (on topic, relevant, etc) should be considered by the closer regardless of whether it was made with LLM-input of any sort or not. Any comment that is bad (off-topic, irrelevant, etc) should be ignored by the closer regardless of whether it was made with LLM-input of any sort or not. Any comment that is both bad and disruptive (e.g. by being excessively long, completely irrelevant, bludgeoning, etc) should be removed and/or hatted as appropriate, regardless of whether it was made with LLM-input of any sort. The good thing is that this is already policy so we don't need to call out LLMs specifically, and indeed doing so is likely to be disruptive in cases where human-written comments are misidentified as being LLM-written (which will happen, regardless of whether tools are used). Thryduulf (talk) 11:19, 10 December 2024 (UTC)
- I think this proposal is not really necessary. I support it, but that is because it is functionally identical to the one directly above it, which I also supported. This should probably be hatted. BugGhost 🦗👻 18:32, 10 December 2024 (UTC)
- What does blatantly generated mean? Do you mean only where the remark is signed with "I, Chatbot", or anything that appears to be LLM-style? I don't think there's much in between. ypn^2 19:21, 10 December 2024 (UTC)
- Procedural close per BugGhost. I'd hat this myself, but I don't think that'd be appropriate since it's only the two of us who have expressed that this proposal is basically an exact clone. Aaron Liu (talk) 03:00, 11 December 2024 (UTC)
Should first language be included in the infobox for historical figures?
Is there a guideline concerning this? "Infobox royalty" apparently has this parameter, but I haven't found a single article that actually uses it. Many articles don't mention the subject's spoken languages at all. In my view, somebody's first language (L1) is just a very basic and useful piece of information, especially for historical figures. This would be helpful in cases where the ruling elites spoke a completely different language from the rest of the country (e.g., High Medieval England or early Qing dynasty China). These things are not always obvious to readers who are unfamiliar with the topic. Including it would be a nice and easy way to demonstrate historical language shifts that otherwise might be overlooked. Perhaps it could also bring visibility to historical linguistic diversity and language groups that have since disappeared. Where there are multiple first languages, they could all be listed. And in cases where a person's first language remains unclear, it could simply be left out. Kalapulla123 (talk) 11:53, 8 December 2024 (UTC)
- I don't think I agree this is a good use of infobox space:
- incongruences between elite spoken languages and popular spoken languages can't be shown with a single parameter (the language spoken by the oppressed would have to be included as well)
- for many people this would be unverifiable (already mentioned in OP) and / or contentious (people living during a language transition)
- sometimes L2 skills will be more than adequate to communicate with subject population when called for
- in cases where the subject's L1 matches their polity's (i.e. most cases), the parameter would feel like unnecessary clutter
- prose description seems adequate
- I think this might be sufficiently important pretty much exclusively for writers where the language they wrote in is not the "obvious" one for their nationality. Johnbod (talk) 12:43, 9 December 2024 (UTC)
- It might also be important for politicians (and similar figures?) in countries where language is a politically-important subject, e.g. Belgium. Thryduulf (talk) 16:29, 9 December 2024 (UTC)
- This seems like a bad idea. Let's take a case where language spoken by a royal was very relevant: Charles V, Holy Roman Emperor. When he became King of Castile as a teenager, he only really spoke Flemish and didn't speak Castilian Spanish, and needless to say trusted the advisors he could actually talk with (i.e. Flemish / Dutch ones he brought with him). He also then immediately skipped out of Castile to go to proto-Germany to be elected Holy Roman Emperor. This ended up causing a rebellion (Revolt of the Comuneros) which was at least partially justified by Castilian nationalism, and partially by annoyed Castilian elites who wanted cushy government jobs. So language-of-royal was relevant. But... the Infobox is for the person as a whole. Charles came back to Castile and spent a stretch of 10 years there and eventually learned rather good Castilian and largely assuaged the elite, at least. He was king of Spain for forty years. So it would seem rather petty to harp on the fact his first language wasn't Castilian in the Infobox, when he certainly did speak it later and through most of his reign, even if not his first few years when he was still basically a kid. SnowFire (talk) 19:47, 9 December 2024 (UTC)
- See below on this. Johnbod (talk) 14:26, 11 December 2024 (UTC)
- SnowFire's fascinating anecdote shows that this information is not appropriate for infoboxes but rather should be described in prose in the body of the article where the subtleties can be explained to the readers. Cullen328 (talk) 19:56, 9 December 2024 (UTC)
- No, it shows that it's not appropriate for that infobox, and therefore that it is not suitable for all infoboxes where it is plausibly relevant. It shows nothing about whether it is or is not appropriate for other infoboxes: the plural of anecdote is not data. Thryduulf (talk) 21:08, 9 December 2024 (UTC)
- But it kind of is here? I picked this example as maybe one of the most obviously relevant cases. Most royals failing to speak the right language don't have this trait linked with a literal war in reliable sources! But if inclusion of this piece of information in an Infobox is still problematic in this case, how could it possibly be relevant in the 99.9% cases of lesser importance? The Infobox isn't for every single true fact. SnowFire (talk) 21:53, 9 December 2024 (UTC)
- It isn't suitable for this infobox not because of a lack of importance, but because stating a single first language would be misleading. There exists the very real possibility of cases where it is both important and simple. Thryduulf (talk) 00:02, 10 December 2024 (UTC)
- Could you (or anyone else in favor of the proposal) identify 5 biographies where this information is both useful to readers and clearly backed by reliable sources? signed, Rosguill 15:06, 11 December 2024 (UTC)
- Charles V claimed to have spoken Italian to women, French to men, Spanish to God, and German to his horse. Hawkeye7 (discuss) 21:35, 9 December 2024 (UTC)
- Sorry, this is just nonsense! Charles V was raised speaking French, which was the language of his aunt's court, although in the Dutch-speaking Mechelen. All his personal letters use French. He only began to be taught Dutch when he was 14, & may never have been much good at it (or Spanish or German). Contrary to the famous anecdote, which is rather late and dubious ("Spanish to God....German to my horse") he seems to have been a rather poor linguist, which was indeed awkward at times. Johnbod (talk) 00:39, 10 December 2024 (UTC)
- (This is a bit off-topic, but "nonsense" is too harsh. I'm familiar that he spoke "French" too, yes, although my understanding was that he did speak "Flemish", i.e. the local Dutch-inflected speech, too? And neither 1500-era French nor Dutch were exactly standardized, so I left it as "Flemish" above for simplicity. If his Dutch was worse than I thought, sure, doesn't really affect the point made, though, which was that his Castilian was non-existent at first. As far as his later understanding of Spanish, his capacity was clearly enough - at the very least I've seen sources say he made it work and it was enough to stave off further discontent from the nobility. Take it up with the authors of the sources, not me.). SnowFire (talk) 16:23, 10 December 2024 (UTC)
- There's a difference between "simplicity" and just being wrong! You should try reading the sources, with which I have no issue. And his ministers were also either native Francophones, like Cardinal Granvelle and his father Nicolas Perrenot de Granvelle (both from Besançon, now in eastern France), or could speak it well; the Burgundian elite had been Francophone for a long time. The backwash from all this remains a somewhat sensitive issue in Belgium, even now. And Charles V was not "King of Spain" (a title he avoided using) for 40 years at all; only after his mother died in 1555 (a year before him) did he become unarguably King of Castile. Johnbod (talk) 14:26, 11 December 2024 (UTC)
- It may not be appropriate for many articles, but it surely is for some. For example, when I told her that England had had kings whose first language was German, someone asked me the other day how many. It would be good to have a quick way of looking up the 18th century Georges to find out. Phil Bridger (talk) 21:20, 9 December 2024 (UTC)
- I think the problem is that people might make assumptions. I would check before saying that George I and George II spoke German as their first language and not French. Languages spoken is probably more useful than birth language, but the list might be incomplete. There is also competing information about George I, and he is an English King, so he has been better researched and documented compared to other historical figures.
- I agree that this is important when language is the basis of community identity, such as in Belgium. Tinynanorobots (talk) 10:38, 10 December 2024 (UTC)
- Ummmm… no. People I disagree with™️ use “infobox bloat” as a boogeyman in arguments about infoboxes. But this is infobox bloat. Even those celebrity/anime character things that tell you shoe size, pinky length and blood type wouldn’t include this. Dronebogus (talk) 18:16, 11 December 2024 (UTC)
- I don't think there needs to be any central policy on this. It could be relevant to include this information for someone, perhaps... maybe... However, infoboxes work best when they contain uncontroversial at-a-glance facts that don't need a bunch of nuance and context to understand. For the example of Charles V, maybe his first language is significant, but putting it in the infobox (where the accompanying story cannot fit) would be a confusing unexplained factoid. Like, maybe once upon a time there was a notable person whose life turned on the fact that they were left-handed. That could be a great bit of content for the main article, but putting handedness in the infobox would be odd. Barnards.tar.gz (talk) 14:33, 12 December 2024 (UTC)
- {{Infobox baseball biography}} includes handedness, and nobody finds that odd content for an infobox.
- {{infobox royalty}} includes the option for up to five native languages, though the OP says it seems to be unused in practice. {{Infobox writer}} has a |language= parameter, and it would be surprising if this were unused. WhatamIdoing (talk) 19:36, 12 December 2024 (UTC)
- Baseball seems to be a good example of where handedness is routinely covered, and easily consumable at a glance without needing further explanation. The scenario where I don't think handedness (or first language) makes sense is when it is a uniquely interesting aspect of that individual's life, because almost by definition there's a story there which the infobox can't tell. Barnards.tar.gz (talk) 10:23, 13 December 2024 (UTC)
- I don't think L1 can be determined for most historical figures without a hefty dose of OR. If you look at my Babel boxes, you'll see that I, as a living human being with all the information about my own life, could not tell you what my own "L1" is. The historical figures for whom this would be relevant mostly spoke many more languages than I do, and without a time machine it would be nigh impossible to say which language they learned first. This isn't even clear for the Qing emperors – I am fairly certain that they all spoke (Mandarin) Chinese very well, and our article never says what language they spoke. Puyi even states that he never spoke Manchu. Adding this parameter would also inflame existing debates across the encyclopedia about ethnonationalism (e.g. Nikola Tesla) and infobox bloat. Toadspike 21:21, 12 December 2024 (UTC)
- As with every bit of information in every infobox, if it cannot be reliably sourced it does not go in, regardless of how important it is or isn't. There are plenty of examples of people whose first language is reported in reliable sources. I just did an internal search for "first language was" and on the first page of results found sourced mentions of first language at Danny Driver, Cleopatra, Ruthanne Lum McCunn, Nina Fedoroff, Jason Derulo, Henry Taube and Tom Segev, and an unsourced but plausible mention at Dean Martin. The article strongly suggests that her first language is an important part of Cleopatra's biography such that putting it in the infobox would be justifiable. I am not familiar enough with any of the others to have an opinion on whether it merits an infobox mention there; I'm simply reporting that there are many articles where first language is reliably sourced and a mention is deemed DUE. Thryduulf (talk) 22:08, 12 December 2024 (UTC)
- I have been wondering since this conversation opened how far back the concept of an L1 language, or perhaps the most colloquial first language, can be pushed. Our article doesn't have anything on the history of the concept. CMD (talk) 11:31, 13 December 2024 (UTC)
- I suspect the concept is pretty ancient, I certainly wouldn't be surprised to learn it arose around the same time as diplomacy between groups of people with different first languages. The note about it at Cleopatra certainly suggests it was already a well-established concept in her era (1st century BCE). Thryduulf (talk) 13:23, 13 December 2024 (UTC)
- The concept of different social strata speaking different languages is old, but I'm not sure whether they viewed learning languages the same way we do. It's certainly possible, and perhaps it happened in some areas at some times, but I hesitate to assume it's the case for every historical person with an infobox. CMD (talk) 16:05, 13 December 2024 (UTC)
- It's certainly not going to be appropriate for the infobox of every historical person, as is true for (nearly?) every parameter. The questions here are whether it is appropriate in any cases, and if so in enough cases to justify having it as a parameter (how many is enough? I'd say a few dozen at minimum, ideally more). I think the answer the first question is "yes". The second question hasn't been answered yet, and I don't think we have enough information here yet to answer it. Thryduulf (talk) 21:54, 13 December 2024 (UTC)
- The question is not whether it is appropriate in any cases; the question is whether it is worth the trouble. I guarantee that this would lead to many vicious debates, despite being in most cases an irrelevant and unverifiable factoid based on inappropriate ABOUTSELF. This is the same reason we have MOS:ETHNICITY/NATIONALITY. Toadspike 07:29, 16 December 2024 (UTC)
- Nah. If this were "a very basic and useful piece of information" then we would already be deploying it site-wide, so it obviously is not. In the vast majority of cases, it would involve intolerable WP:OR or even just guessing masquerading as facts. We do not know for certain that someone born in France had French as their first/native/home language. I have close relatives in the US, in a largely English-speaking part of the US, whose first language is Spanish. For historical figures it would get even more ridiculous, since even our conception of languages today as, e.g., "German" and "French" and "Spanish" and "Japanese" is a bit fictive and is certainly not historically accurate, because multiple languages were (and still are, actually) spoken in these places. We would have no way to ascertain which was used originally or most natively for the average historical figure. Beyond a certain comparatively recent point, most linguistics is reconstruction (i.e. educated guesswork); if there's not a substantial corpus of surviving written material we cannot be sure. That matters a lot for figures like Genghis Khan and King Bridei I of the Picts. Finally, it really is just trivia in the vast majority of cases. A biographical figure's first/primary/home/most-fluent/most-frequently-used language (and some of those might not be the same, since all of them can change over time other than "first") is something that could be included when certain from RS, but it's not lead- or infobox-worthy in most cases, unless it pertains directly to the subject's notability (e.g. as a writer) and also isn't already implicit from other details like nationality. — SMcCandlish ☏ ¢ 😼 03:42, 23 December 2024 (UTC)
Restrict new users from crosswiki uploading files to Commons
I created this Phabricator ticket (phab:T370598) in July of this year, figuring that consensus to restrict non-confirmed users from crosswiki uploading files to Commons is implied. Well, consensus already agreed at Commons in response to the WMF study on crosswiki uploading. I created an attempted Wish at Meta-wiki, which was then rejected, i.e. "archived", as policy-related and requiring "alignment across various wikis to implement such a policy". Now I'm starting this thread, thinking that the consensus here would already or implicitly support such a restriction, but I can stand corrected about the outcome here. George Ho (talk) 06:34, 9 December 2024 (UTC); corrected, 08:10, 9 December 2024 (UTC)
- Support. I am not sure why this relies on alignment across wikis, those on Commons are best placed to know what is making it to Commons. The change would have little to no impact on en.wiki. If there is an impact, it would presumably be less cleaning up of presumably fair use files migrated to Commons that need to be fixed here. That said, if there needs to be consensus, then obviously support. We shouldn't need months of bureaucracy for this. CMD (talk) 06:41, 9 December 2024 (UTC)
- Support, I don't know that my input really counts as new consensus because I said this at the time, but the problem is much worse than what the study suggests, as we are still finding spam, copyvios, unusable selfies and other speedy-deletable uploads from the timespan audited. Gnomingstuff (talk) 02:14, 10 December 2024 (UTC)
- Support As this applies to images being posted to Commons, but by a method that side steps their wishes, I don't see why another wiki should stand in the way. -- LCU ActivelyDisinterested «@» °∆t° 16:54, 10 December 2024 (UTC)
- Support. I do think that disabling the ability for new editors on the English Misplaced Pages to engage in crosswiki uploads to Commons would be a net positive; the Commons community has come to this conclusion several times, and the research confirms that cross-wiki uploads by new users cause more trouble than the good uploads are worth. — Red-tailed hawk (nest) 00:36, 11 December 2024 (UTC)
- Support Way too low signal-to-noise ratio; most of these images are copyvios or otherwise useless. -- King of ♥ ♦ ♣ ♠ 01:12, 11 December 2024 (UTC)
- Support like the above editors. Much spam, many copyvios, few good images.—Alalch E. 15:47, 11 December 2024 (UTC)
- I don't think this should be any sort of enwiki policy. If commonswiki wants to restrict something that should be up to them. I can't possibly see how it would need to be specific to the English Misplaced Pages (i.e. but not about new users on dewiki, eswikt, etc). — xaosflux 16:19, 11 December 2024 (UTC)
- As noted by George Ho above, Commons has already done this for all wikis. The question is whether or not we want the English Misplaced Pages to assist in implementing this (perhaps by changing a local setting or software configuration to require that their uploads be local), rather than merely relying upon a Commons edit filter (which can be a bit unfriendly to new users). — Red-tailed hawk (nest) 19:50, 11 December 2024 (UTC)
- This comment interests me: "Interestingly, we found that most uploaders were either marketers (editing/uploading on behalf of another entity such as their employer), or they were self-promoters (creating pages about themselves, unaware of the "notability" requirement)."
- So I wonder whether, instead of stopping this, we want a bot to look at newbies who create articles/drafts, check whether they uploaded something, and then tag both the image(s) and the pages here with a note that says something like "There is a 90% chance that this has been posted by a marketer or self-promoter", with suitable links to pages such as Misplaced Pages:Paid-contribution disclosure. Or maybe even a WP:STICKYPROD process.
- On the question of what to do, it should be possible to hide the cross-wiki upload button. The real question is, do we replace it with a link to c:Special:UploadWizard? The Commons POV has been that it's bad for people to upload images within the visual editor, but okay for the same person to upload the same image with the UploadWizard. I'm not sure the net result is actually any different, especially for these marketers/self-promoters (in terms of net quality/acceptability; from Commons' POV, it's better because (a lot? a little?) fewer of them will click through to upload anything at Commons). WhatamIdoing (talk) 19:49, 12 December 2024 (UTC)
- Support Nearly every single thing I've ever put up for deletion at Commons has been stuff uploaded to spam en.wp. It never stops. Just Step Sideways 19:55, 11 December 2024 (UTC)
- Is this still happening? According to @Red-tailed hawk this is already blocked. — xaosflux 20:52, 11 December 2024 (UTC)
- Yes, it's still happening. Such uploads include these images from EnWiki; the edit filter, as currently implemented, only filters out images with certain characteristics. — Red-tailed hawk (nest) 21:05, 11 December 2024 (UTC)
- It is for sure still happening, I've nominated a few in just the past week. Just Step Sideways 22:26, 11 December 2024 (UTC)
- It's still happening. A lot of them go to the uncategorized backlog which has well over 100,000 things in it so they get overlooked. Gnomingstuff (talk) 19:18, 12 December 2024 (UTC)
- If anyone wants to help with that, then click on c:Special:RandomInCategory/Category:All media needing categories as of 2018. Figure out what the image is (Google Lens or TinEye searches can help; go to c:Special:Preferences#mw-prefsection-gadgets and ⌘F for TinEye to find the right item). If you can identify it, then add a relevant cat. I believe that Misplaced Pages:HotCat is enabled by default for all logged-in editors, so searching for cats is usually pretty easy. If you can't find something obviously relevant, then skip it and try another. WhatamIdoing (talk) 20:02, 12 December 2024 (UTC)
- I got another one just now. This really can't happen fast enough. Just Step Sideways 23:51, 12 December 2024 (UTC)
- Support It's honestly kinda dumb that we have to have this whole other consensus process after the prior one just because people at Meta-wiki don't want to implement it. Silverseren 20:35, 13 December 2024 (UTC)
- Support, since this has already been decided and WMF is just being recalcitrant. WP:NOT#BUREAUCRACY, and Commons isn't one either, nor is Wiktionary, etc., and to the extent WMF wants to be one that needs to be nipped in the bud. — SMcCandlish ☏ ¢ 😼 03:44, 23 December 2024 (UTC)
Question(s) stemming from undiscussed move
"AIM-174 air-to-air missile" was moved without discussion to "AIM-174B." Consensus was reached RE: the removal of "air-to-air missile," but no consensus was reached regarding the addition or removal of the "B." After a no-consensus RM close (which should have brought us back to the original title, sans agreed-upon unneeded additional disambiguator, in my opinion), I requested the discussion be re-opened, per pre-MRV policy. (TO BE CLEAR; I should have, at this time, requested immediate reversion. However, I did not want to be impolite or pushy) The original closer -- Asukite (who found for "no consensus") was concerned they had become "too involved" in the process and requested another closer. Said closer immediately found consensus for "AIM-174B." I pressed-on to a MRV, where an additional "no consensus" (to overturn) finding was issued. As Bobby Cohn pointed-out during the move review, "I take issue with the participating mover's interpretation of policy 'Unfortunately for you, a no consensus decision will result in this article staying here' in the RM, and would instead endorse your idea that aligns with policy, that a no consensus would take us back to the original title, sans extra disambiguator."
The issues, as I see them, are as follows:
WP:RMUM: The move from “AIM-174 air-to-air missile” to “AIM-174B” was conducted without discussion, and I maintain all post-move discussions have achieved "no consensus."
Burden of Proof: The onus should be on the mover of the undiscussed title to justify their change, not on others to defend the original title. I refrained from reverting prior to initiating the RM process out of politeness, which should not shift the burden of proof onto me.
Precedent: I am concerned with the precedent. Undiscussed moves may be brute-forced into acceptance even if "no consensus" or a very slim consensus (WP:NOTAVOTE) is found?
Argument in-favor of "AIM-174:" See the aforementioned RM for arguments in-favor and against. However, I would like to make it clear that I was the only person arguing WP. Those in-favor of "174B" were seemingly disagreeing with my WP arguments, but not offering their own in-support of the inclusion of "B." That said, my primary WP-based argument is likely WP:CONSISTENT; ALL U.S. air-to-air-missiles use the base model as their article title. See: AIM-4 Falcon, AIM-26 Falcon, AIM-47 Falcon, AIM-9 Sidewinder, AIM-7 Sparrow, AIM-54 Phoenix, AIM-68 Big Q, AIM-82, AIM-95 Agile, AIM-97 Seekbat, AIM-120 AMRAAM, AIM-132, AIM-152 AAAM, AIM-260. 174"B" is unnecessary while violating consistency.
Do my policy contentions hold any weight? Or am I mad? Do I have any path forward, here?
TO BE CLEAR, I am not alleging bad faith on behalf of anyone, and I am extremely grateful to all those who have been involved, particularly the RM closer that I mentioned, as well as the MRV closer, ModernDayTrilobite. I would like to make it clear that this isn't simply a case of a MRV 'not going my way.' Again, I am concerned w/ the precedent and with the onus having been shifted to me for months. I also apologize for the delay in getting this here; I originally stopped-over at the DRN but Robert McClenon kindly suggested I instead post here. MWFwiki (talk) 00:08, 12 December 2024 (UTC)
- Are you familiar with Wikipedia:Article titles#Considering changes? Do you think you understand why that rule exists? WhatamIdoing (talk) 23:31, 12 December 2024 (UTC)
- I am quite familiar with it. It seemingly supports my argument(s), so...? Is there a particular reason you're speaking in quasi-riddles? MWFwiki (talk) 01:11, 13 December 2024 (UTC)
- If yours is the title favored by the policy, then none of this explanation makes any difference. You just demand that it be put back to the title favored by the policy, and editors will usually go along with it. (It sometimes requires spelling out the policy in detail, but ultimately, most people want to comply with the policy.)
- If yours is not the title favored by the policy, then the people on the other 'side' are going to stand on policy when you ask to move it, so you'd probably have to get the policy changed to 'win'. If you want to pursue that, you will need to understand why the rule is set this way, so that you have a chance of making a convincing argument. WhatamIdoing (talk) 05:24, 13 December 2024 (UTC)
- I think several individuals involved in this process have agreed that the default title is the favored title, at least as far as WP:TITLECHANGES, as you say.
(The only reason I listed any further ‘litigation’ here is to show what was being discussed in-general for convenience’s sake, not necessarily to re-litigate)
However, at least two individuals involved have expressed to me that they felt their hands were tied by the RM/MRV process. Otherwise, as I mentioned (well, as Bobby_Cohn mentioned) the train of thought seemed to be “well, I don’t want the title to be changed,” and this was seemingly enough to override policy. Or, at best, it was seemingly a “well, it would be easier to just leave it as-is” sort of decision.
- And again, I, 100%, should have been more forceful; The title should have been reverted per the initial “no consensus” RM-closure and I will certainly bear your advice in-mind in the future. That said, I suppose what I am asking is would it be inappropriate to ask the original RM-closer to revert the article at this point, given how much time has passed?
MWFwiki (talk) 06:29, 13 December 2024 (UTC)
- Given what was written in Talk:AIM-174B#Requested move 20 September 2024 six weeks ago, I think that none of this is relevant. "Consensus to keep current name" does not mean that you get to invoke rules about what happens when there is no consensus. I suggest that you give up for now, wait a long time (a year? There is no set time, but it needs to be a l-o-n-g time), and maybe start a new Wikipedia:Requested moves (e.g., in 2026). WhatamIdoing (talk) 19:41, 13 December 2024 (UTC)
- Thanks! MWFwiki (talk) 05:09, 14 December 2024 (UTC)
- Everything ModernDayTrilobite advised you of is correct. Vpab15 closed the RM and determined that consensus was reached. Nothing since then has overturned or otherwise superseded Vpab15's closure. Therefore that closure remains in force. You already challenged the validity of Vpab15's closure at move review, and you have no avenue for challenging it again. Your best bet is to wait a tactful amount of time (several months) before starting another RM. And in that RM, none of this procedural stuff will matter, and you will be free to focus just on making the clearest, simplest case for why AIM-174 is the best title. Adumbrativus (talk) 06:10, 13 December 2024 (UTC)
- I suppose my issue is better summed-up by my above discussion with WhatamIdoing; The MRV shouldn’t have been required. That burden should never have been on me. The title should have been reverted at the initial “no consensus” per WP:TITLECHANGES. Otherwise, undiscussed moves — when challenged — may now be upheld by either consensus or no consensus? This is not what WP:TITLECHANGES says, obviously. That said I take full responsibility for not being clearer with this argument, and instead focusing on arguing for a ‘different’ title, when I should have been arguing for the default title per TITLECHANGES. MWFwiki (talk) 06:33, 13 December 2024 (UTC)
- You've repeatedly pointed to the initial self-reverted closure as if it's somehow significant. It isn't. Asukite voluntarily decided to close the discussion, and voluntarily self-reverted their decision to close. It doesn't matter whether you asked for it or someone else asked or no one asked. They had the right to self-revert then, for any reason or no reason. The net result is the same as if Asukite had never closed it at all. Only Vpab15's closure, which was 100% on Vpab15's own authority and 0% on the supposed authority of the annulled earlier closure, is binding. Adumbrativus (talk) 09:22, 13 December 2024 (UTC)
- I don't disagree with your latter statement, but why would an initial finding of no-consensus not matter? It should have brought us back to the default title, not simply been reverted. Because that policy wasn't followed, I'm here now, is my point. Regardless, I understand; Thank you for your advice! Well, I appreciate your time and consideration! :-) MWFwiki (talk) 05:08, 14 December 2024 (UTC)
- (Involved at the MRV) Seeing as I've been tagged in this multiple times and quoted, I'll give my thoughts on this. I don't want to accuse MWFwiki of selectively quoting me but I do think that my quote above was, when taken into account with the following discussion, more about meta-conversation about the correct policy to implement in the event the MRV went the other way. I explicitly said in the immediately following message
the view that the close was not outside the scope of WP:RMCI is reasonable and good faith interpretation.
I do think this close was within bounds, and the following MRV appropriately closed and summarised.
- Yes, had the original close of no consensus stood, then it could have been reverted wholecloth. It was self-reverted and therefore plays no role in the consideration of the subsequent closure. We're always going to take the most recent finding of consensus to be what holds. It seems to have been said in the above that had the no consensus closure held and the appropriate WP:RMNCREV policy been applied, then the appellant here would have gotten their preferred outcome. But to continue to argue this in the face of the subsequent developments is where this enters wikilawyering territory. I think that since then, the appellant has continued to make policy arguments that would be better suited for a subsequent and focused RM on the actual title rather than wikilawyer about a previous close that was self-reverted and continuing to argue policy.
- There's nothing for this venue to really change in regards to that AT and the discussion to change the AT would need to be had at the article's talk page. My sincere advice to appellant is to wait a reasonable amount of time and make strong policy based arguments about the preferred title (don't just quote policy, we editors are good at clicking links and reading it for ourselves—quoting nothing but policy back at us makes us feel like you've taken us for fools; instead provide facts and sources that support the relevant policies and link those). Spend some time at WP:RMC and see what well-argued and successful RMs typically look like. Bobby Cohn (talk) 17:38, 17 December 2024 (UTC)
CSD A12. Substantially written using a large language model, with hallucinated information or fictitious references
WITHDRAWN Per the solution to the problem already being covered by WP:DRAFTREASON, something I was unaware of when I made the proposal. Svampesky (talk) 15:49, 14 December 2024 (UTC)
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
When fixing up new articles, I have encountered articles that appear to have been substantially generated by AI, containing hallucinated information. While these articles may not meet other criteria for speedy deletion, as the subjects themselves are sometimes real and notable, waiting for seven days to PROD the articles is inefficient. I recommend designating WP:A12 for the speedy deletion of these articles. I have created a template (User:Svampesky/Template:Db-a12) if it is successful. A recent example is the article on the Boston University Investment Office, where the author explicitly disclosed that it was created using a large language model and contains references to sources that don't exist. I initially G11'd it, as it seemed the most appropriate, but was declined, and the article was subsequently PRODed. Svampesky (talk) 21:13, 12 December 2024 (UTC)
- CSD are generally limited to things that are unambiguously obvious. I imagine the number of cases in which it's unambiguously obvious that the entire page was generated by an LLM (as opposed to the editor just using the LLM to generate references, for example) is small enough that it doesn't warrant a speedy deletion criterion. --Ahecht (TALK PAGE) 21:29, 12 December 2024 (UTC)
- I like this idea but agree that it's better not as a CSD but perhaps its own policy page. Andre🚐 21:33, 12 December 2024 (UTC)
- I don't think it even merits a policy page. The number of cases where the LLM use is objectively unambiguous, and the article content sufficiently problematic that deletion is the only appropriate course of action and it cannot be (speedily) deleted under existing policy is going to be vanishingly small. Even the OP's examples were handled by existing processes (PROD) sufficiently. Thryduulf (talk) 22:11, 12 December 2024 (UTC)
- @Svampesky, when you say that Wikipedia:Proposed deletion is "inefficient", do you mean that you don't want to wait a week before the article gets deleted? WhatamIdoing (talk) 23:32, 12 December 2024 (UTC)
- My view is that Wikipedia:Proposed deletion is inefficient, as it leaves articles that clearly contain hallucinated LLM-generated content and fictitious references (which almost certainly will be deleted) in the mainspace for longer than necessary. Svampesky (talk) 00:03, 13 December 2024 (UTC)
- Efficiency usually compares the amount of effort something takes, not the length of time it takes. "Paint it and leave it alone for 10 minutes to dry" is the same amount of hands-on work as "Paint it and leave it alone for 10 days to dry", so they're equally efficient processes. It sounds like you want a process that isn't less hands-on work/more efficient, but instead a process that is faster.
- Also, if the subject qualifies for an article, then deletion isn't necessarily the right solution. Blanking bad content and bad sources is officially preferred (though more work) so that there is only verifiable content with one or more real sources left on the page – even if that content is only a single sentence.
- Efficiency and speed is something that many editors like. However, there has to be a balance. We're WP:HERE to build an encyclopedia, which sometimes means that rapidly removing imperfect content is only the second or third most important thing we do. WhatamIdoing (talk) 00:43, 13 December 2024 (UTC)
- This part "as the subjects themselves are sometimes real and notable" is literally an inherent argument against using CSD (or PROD for that matter). WP:TNT the article to a sentence if necessary, but admitting that you're trying to delete an article you know is notable just means you're admitting to vandalism. Silverseren 00:07, 13 December 2024 (UTC)
- The categorization of my proposal as "admitting to vandalism" is incorrect. WP:G11, the speedy deletion criterion I initially used for the article, specifies deleting articles that "would need to be fundamentally rewritten to serve as encyclopedia articles". Articles that have been generated using large language models, with hallucinated information or fictitious references, would need to be fundamentally rewritten to serve as encyclopedia articles. Svampesky (talk) 00:42, 13 December 2024 (UTC)
- Yes, but G11 is looking for blatant advertising ("Buy widgets now at www.widgets.com! Blue-green widgets in stock today!") It's not looking for anything and everything that needs to be fundamentally re-written. WhatamIdoing (talk) 00:45, 13 December 2024 (UTC)
- (Edit Conflict) How does G11 even apply here? Being written via LLM does not make an article "promotional". Furthermore, even that CSD criterion states "If a subject is notable and the content could plausibly be replaced with text written from a neutral point of view, this is preferable to deletion." I.e. TNT it to a single sentence and problem solved. Silverseren 00:46, 13 December 2024 (UTC)
- The venue for proposing new criteria is at Wikipedia talk:Criteria for speedy deletion. So please make sure that you don't just edit in a new criterion without an RFC approving it, else it will be quickly reverted. Graeme Bartlett (talk) 00:20, 13 December 2024 (UTC)
- Since we are talking about BLPs… the harm of hallucinated information does need to be taken very seriously. I would say the first step is to stubbify.
- However, Deletion can be held off as a potential second step, pending a proper BEFORE check. Blueboar (talk) 01:06, 13 December 2024 (UTC)
- If the hallucination is sufficiently dramatic ("Joe Film is a superhero action figure", when it ought to say that he's an actor who once had a part in a superhero movie), then you might be able to make a good case for {{db-hoax}}. WhatamIdoing (talk) 05:26, 13 December 2024 (UTC)
- I have deleted an AI generated article with fake content and references as a hoax. So that may well be possible. Graeme Bartlett (talk) 12:23, 13 December 2024 (UTC)
- Isn't this covered by WP:DRAFTREASON? Gnomingstuff (talk) 20:34, 13 December 2024 (UTC)
AFD clarification
The Articles for deletion article states that:
If a redirection is controversial, however, AfD may be an appropriate venue for discussing the change in addition to the article's talk page.
Does this mean that an AFD can be started by someone with the intent of redirecting instead of deleting? Plasticwonder (talk) 04:06, 13 December 2024 (UTC)
- Yes. If there is a contested redirect, the article is restored and it is brought to AfD. voorts (talk/contributions) 04:34, 13 December 2024 (UTC)
- I think the ideal process is:
- Have an ordinary discussion on the talk page about redirecting the page.
- If (and only if) that discussion fails to reach consensus, try again at AFD.
- I dislike starting with AFD. It isn't usually necessary, and it sometimes has a feel of the nom trying to get rid of it through any means possible ("I'll suggest a WP:BLAR, but maybe I'll be lucky and they'll delete it completely"). WhatamIdoing (talk) 05:31, 13 December 2024 (UTC)
- Would need some stats on the "it isn't usually necessary" claim, my intuition based on experience is that if a BLAR is contested it's either dropped or ends up at AfD. CMD (talk) 05:48, 13 December 2024 (UTC)
- I agree with that. From what I have seen at least, if redirecting is contested, it then is usually discussed at AFD, but that's just me. Plasticwonder (talk) 08:42, 13 December 2024 (UTC)
- It depends how active the respective talk pages are (redirected article and target), but certainly for ones that are quiet AfD is going to be the most common. Thryduulf (talk) 09:33, 13 December 2024 (UTC)
- It will also depend on whether you advertise the discussion, e.g., at an active WikiProject. WhatamIdoing (talk) 19:44, 13 December 2024 (UTC)
- I usually just go straight to AfD. I've found that editors contesting redirects usually !vote keep and discussing on talk just prolongs the inevitable AfD. voorts (talk/contributions) 14:58, 13 December 2024 (UTC)
- Gotcha. Plasticwonder (talk) 15:29, 13 December 2024 (UTC)
- Looking at the above comments: What is it about the Wikipedia:Proposed article mergers process that isn't working for you all? If you redirect an article and it gets reverted, why aren't you starting a PM? WhatamIdoing (talk) 21:37, 16 December 2024 (UTC)
- For me, it's lack of participation, no tool to list something at PAM, and no relisting option so proposed merges just sit for a very long time before being closed. voorts (talk/contributions) 23:21, 16 December 2024 (UTC)
- What voorts said. Multiple times now I've floated the idea of making PAM more like RM, one of these years I should really get around to doing something more than that. I won't have time before the new year though. Thryduulf (talk) 23:45, 16 December 2024 (UTC)
- I think PAM should be merged into AfD, since both generally involve discussions of notability. voorts (talk/contributions) 00:00, 17 December 2024 (UTC)
- Merging often involves questions of overlap and topical distinction rather than just notability, although this also ends up discussed at AfD. I do wonder if this would leave proposals to split out in the cold though, as much like merge discussions they just sit there. CMD (talk) 04:00, 17 December 2024 (UTC)
- I think PAM should be merged into AfD, since both generally involve discussions of notability. voorts (talk/contributions) 00:00, 17 December 2024 (UTC)
- The most important tool is Twinkle > Tag > Merge. I personally prefer its "Merge to" option, but there's a plain "Merge" if you don't know exactly which page should be the target.
- All merges get bot-listed in Wikipedia:Article alerts. Wikipedia:Proposed article mergers is another place to advertise it, and I'd bet that Twinkle could post those automatically with relatively little work (an optional button, similar to notifying the creator of deletion plans).
- I dislike "relisting"; things should just stay open as long as they need to, without adding decorative comments about the discussion not happening fast enough. In my experience, merge proposals stay open because everyone's agreed on the outcome but nobody wants to do the work. WhatamIdoing (talk) 06:46, 17 December 2024 (UTC)
- In this context isn't redirection a *type* of deletion (specifically delete while leaving a redirect)? Horse Eye's Back (talk) 07:05, 17 December 2024 (UTC)
- I would think so. Plasticwonder (talk) 07:33, 17 December 2024 (UTC)
- It's only a deletion if an admin pushes the delete button. Blanking and redirecting – even blanking, redirecting, and full-protecting the redirect so nobody can un-redirect it – is not deletion. WhatamIdoing (talk) 07:34, 18 December 2024 (UTC)
- That might be clear to you (and the other admins) but almost nobody in the general community understands that (to the point where I would say it's just wrong, deletion is broader than that in practice). Horse Eye's Back (talk) 16:23, 18 December 2024 (UTC)
- Well, it has always been clear to me, and I am not, and have never wished to be, an admin. But, then again, I am a bit strange in that I expect things to be as people say that they will be. Phil Bridger (talk) 18:34, 18 December 2024 (UTC)
- Contested redirects going to AfD makes sense. Articles are redirected for the same reasons they're deleted and redirecting is probably the most common ATD. I've opened plenty of AfDs where my nom recommends a redirect instead of deletion, including when I've BLARed an article and had the BLAR reverted. voorts (talk/contributions) 18:38, 18 December 2024 (UTC)
- If a redirect has already been discussed or attempted, and consensus can't be reached easily, then I've got no problem with AFD. What I don't want to see is no discussion, no bold redirects, nobody's even hinted about a merge, and now it's at AFD, when the problem could have been resolved through a less intense method. WhatamIdoing (talk) 19:07, 18 December 2024 (UTC)
- That might be clear to you (and the other admins) but almost nobody in the general community understands that (to the point where I would say its just wrong, deletion is broader than that in practice). Horse Eye's Back (talk) 16:23, 18 December 2024 (UTC)
RfC: Voluntary RfA after resignation
Should Wikipedia:Administrators#Restoration of admin tools be amended to:
- Option 1 – Require former administrators to request restoration of their tools at the bureaucrats' noticeboard (BN) if they are eligible to do so (i.e., they do not fit into any of the exceptions).
- Option 2 – <s>Clarify</s> Maintain the status quo that former administrators who would be eligible to request restoration via BN may instead request restoration of their tools via a voluntary request for adminship (RfA).
- Option 3 – Allow bureaucrats to SNOW-close RfAs as successful if (a) 48 hours have passed, (b) the editor has right of resysop, and (c) a SNOW close is warranted.
Background: This issue arose in one recent RfA and is currently being discussed in an ongoing RfA. voorts (talk/contributions) 21:14, 15 December 2024 (UTC)
Note: There is an ongoing related discussion at Wikipedia:Village pump (idea lab) § Making voluntary "reconfirmation" RFA's less controversial.
Note: Option 2 was modified around 22:08, 15 December 2024 (UTC).
Note: Added option 3. theleekycauldron (talk • she/her) 22:12, 15 December 2024 (UTC)
- Notified: Wikipedia:Administrators' noticeboard, Wikipedia:Bureaucrats' noticeboard, Wikipedia talk:Administrators, Wikipedia talk:Requests for adminship, T:CENT. voorts (talk/contributions) 21:19, 15 December 2024 (UTC)
- 2 per Kline's comment at Hog Farm's RfA. If an admin wishes to be held accountable for their actions at a re-RfA, they should be allowed to do so. charlotte 21:22, 15 December 2024 (UTC)
- Also fine with 3 charlotte 22:23, 15 December 2024 (UTC)
- There is ongoing discussion about this at Wikipedia:Village pump (idea lab)#Making voluntary "reconfirmation" RFA's less controversial. CMD (talk) 21:24, 15 December 2024 (UTC)
- 2, after thought. I don't think 3 provides much benefit, and creating a separate class of RfAs that are speedy passed feels like a misstep. If there are serious issues surrounding wasting time on RfAs set up under what might feel to someone like misleading pretenses, that is best solved by putting some indicator next to their RFA candidate name. Maybe "Hog Farm (RRfA)". CMD (talk) 14:49, 16 December 2024 (UTC)
best solved by putting some indicator next to their RFA candidate name. Maybe "Hog Farm (RRfA)"
- I like this idea, if option 2 comes out as consensus I think this small change would be a step in the right direction, as the "this isn't the best use of time" crowd (myself included) would be able to quickly identify the type of RFAs they don't want to participate in. BugGhost 🦗👻 11:05, 17 December 2024 (UTC)
- I think that's a great idea. I would support adding some text encouraging people who are considering seeking reconfirmation to add (RRfA) or (reconfirmation) after their username in the RfA page title. That way people who are averse to reading or participating in reconfirmations can easily avoid them, and no one is confused about what is going on. 28bytes (talk) 14:23, 17 December 2024 (UTC)
- I think this would be a great idea if it differentiated against recall RfAs. Aaron Liu (talk) 18:37, 17 December 2024 (UTC)
- If we are differentiating three types of RFA we need three terms. Post-recall RFAs are referred to as "reconfirmation RFAs", "Re-RFAS" or "RRFAs" in multiple places, so ones of the type being discussed here are the ones that should take the new term. "Voluntary reconfirmation RFA" (VRRFA or just VRFA) is the only thing that comes to mind but others will probably have better ideas. Thryduulf (talk) 21:00, 17 December 2024 (UTC)
- 1 * Pppery * it has begun... 21:25, 15 December 2024 (UTC)
- 2 I don't see why people trying to do the right thing should be discouraged from doing so. If others feel it is a waste of time, they are free to simply not participate. El Beeblerino 21:27, 15 December 2024 (UTC)
- 2 Getting reconfirmation from the community should be allowed. Those who see it as a waste of time can ignore those RfAs. Schazjmd (talk) 21:32, 15 December 2024 (UTC)
- Of course they may request at RfA. They shouldn't but they may. This RfA feels like it does nothing to address the criticism actually in play and per the link to the idea lab discussion it's premature to boot. Barkeep49 (talk) 21:38, 15 December 2024 (UTC)
- 2 per my comments at the idea lab discussion and Queen of Hearts, Beeblebrox and Schazjmd above. I strongly disagree with Barkeep's comment that "They shouldn't". It shouldn't be made mandatory, but it should be encouraged where the time since desysop and/or the last RFA has been lengthy. Thryduulf (talk) 21:42, 15 December 2024 (UTC)
- When to encourage it would be a worthwhile RfC and such a discussion could be had at the idea lab before launching an RfC. Best, Barkeep49 (talk) 21:44, 15 December 2024 (UTC)
- I've started that discussion as a subsection to the linked VPI discussion. Thryduulf (talk) 22:20, 15 December 2024 (UTC)
- 1 or 3. RFA is an "expensive" process in terms of community time. RFAs that qualify should be fast-tracked via the BN process. It is only recently that a trend has emerged that folks that don't need to RFA are RFAing again. 2 in the last 6 months. If this continues to scale up, it is going to take up a lot of community time, and create noise in the various RFA statistics and RFA notification systems (for example, watchlist notices and User:Enterprisey/rfa-count-toolbar.js). –Novem Linguae (talk) 21:44, 15 December 2024 (UTC)
- Making statistics "noisy" is just a reason to improve the way the statistics are gathered. In this case collecting statistics for reconfirmation RFAs separately from other RFAs would seem to be both very simple and very effective. If (and it is a very big if) the number of reconfirmation RFAs means that notifications are getting overloaded, then we can discuss whether reconfirmation RFAs should be notified differently. As far as differentiating them, that is also trivially simple - just add a parameter to template:RFA (perhaps "reconfirmation=y") that outputs something that bots and scripts can check for. Thryduulf (talk) 22:11, 15 December 2024 (UTC)
- Option 3 looks like a good compromise. I'd support that too. –Novem Linguae (talk) 22:15, 15 December 2024 (UTC)
- I'm weakly opposed to option 3, editors who want feedback and a renewed mandate from the community should be entitled to it. If they felt that a quick endorsement was all that was required, then they could have had that at BN; they explicitly chose not to go that route. Nobody is required to participate in an RFA, so if it is going the way you think it should, or you don't have an opinion, then just don't participate and your time has not been wasted. Thryduulf (talk) 22:20, 15 December 2024 (UTC)
- 2. We should not make it more difficult for administrators to be held accountable for their actions in the way they please. JJPMaster (she/they) 22:00, 15 December 2024 (UTC)
- Added option 3 above. Maybe worth considering as a happy medium, where unsure admins can get a check on their conduct without taking up too much time. theleekycauldron (talk • she/her) 22:11, 15 December 2024 (UTC)
- 2 – If a former admin wishes to subject themselves to RfA to be sure they have the requisite community confidence to regain the tools, why should we stop them? Any editor who feels the process is a waste of time is free to ignore any such RfAs. — Jkudlick ⚓ (talk) 22:12, 15 December 2024 (UTC)
- I would also support option 3 if the time is extended to 72 hours instead of 48. That, however, is a detail that can be worked out after this RfC. — Jkudlick ⚓ (talk) 02:05, 16 December 2024 (UTC)
- Option 3 per leek. voorts (talk/contributions) 22:16, 15 December 2024 (UTC)
- A further note: option 3 gives 'crats the discretion to SNOW close a successful voluntary re-RfA; it doesn't require such a SNOW close, and I trust the 'crats to keep an RfA open if an admin has a good reason for doing so. voorts (talk/contributions) 23:24, 16 December 2024 (UTC)
- 2 as per JJPMaster. Regards, --Goldsztajn (talk) 22:20, 15 December 2024 (UTC)
- Option 2 (no change) – The sample size is far too small for us to analyze the impact of such a change, but I believe RfA should always be available. Now that WP:RECALL is policy, returning administrators may worry that they have become out of touch with community norms and may face a recall as soon as they get their tools back at BN. Having this familiar community touchpoint as an option makes a ton of sense, and would be far less disruptive / demoralizing than a potential recall. Taking this route away, even if it remains rarely used, would be detrimental to our desire for increased administrator accountability. – bradv 22:22, 15 December 2024 (UTC)
- (edit conflict) I'm surprised the response here hasn't been more hostile, given that these give the newly-unresigned administrator a get out of recall free card for a year. —Cryptic 22:25, 15 December 2024 (UTC)
- @Cryptic hostile to what? Thryduulf (talk) 22:26, 15 December 2024 (UTC)
- 2, distant second preference 3. I would probably support 3 as first pick if not for recall's rule regarding last RfA, but as it stands, SNOW-closing a discussion that makes someone immune to recall for a year is a non-starter. Between 1 and 2, though, the only argument for 1 seems to be that it avoids a waste of time, for which there is the much simpler solution of not participating and instead doing something else. Special:Random and Misplaced Pages:Backlog are always there. -- Tamzin (they|xe|🤷) 23:31, 15 December 2024 (UTC)
- 1 would be my preference, but I don't think we need a specific rule for this. -- Ajraddatz (talk) 23:36, 15 December 2024 (UTC)
- Option 1.
No second preference between 2 or 3. As long as a former administrator didn't resign under a cloud, picking up the tools again should be low friction and low effort for the entire community. If there are issues introduced by the recall process, they should be fixed in the recall policy itself. Daniel Quinlan (talk) 01:19, 16 December 2024 (UTC)
- After considering this further, I prefer option 3 over option 2 if option 1 is not the consensus. Daniel Quinlan (talk) 07:36, 16 December 2024 (UTC)
- Option 2, i.e. leave well enough alone. There is really not a problem here that needs fixing. If someone doesn’t want to “waste their time” participating in an RfA that’s not required by policy, they can always, well, not participate in the RfA. No one is required to participate in someone else’s RfA, and I struggle to see the point of participating but then complaining about “having to” participate. 28bytes (talk) 01:24, 16 December 2024 (UTC)
- Option 2 nobody is obligated to participate in a re-confirmation RfA. If you think they are a waste of time, avoid them. LEPRICAVARK (talk) 01:49, 16 December 2024 (UTC)
- 1 or 3 per Novem Linguae. C F A 02:35, 16 December 2024 (UTC)
- Option 3: Because it is incredibly silly to have situations like we do now of "this guy did something wrong by doing an RfA that policy explicitly allows, oh well, nothing to do but sit on our hands and dissect the process across three venues and counting." Your time is your own. No one is forcibly stealing it from you. At the same time it is equally silly to let the process drag on, for reasons explained in WP:SNOW. Gnomingstuff (talk) 03:42, 16 December 2024 (UTC)
- Update: Option 2 seems to be the consensus and I also would be fine with that. Gnomingstuff (talk) 18:10, 19 December 2024 (UTC)
- Option 3 per Gnoming. I think 2 works, but it is a very long process and for someone to renew their tools, it feels like an unnecessarily long process compared to a normal RfA. Conyo14 (talk) 04:25, 16 December 2024 (UTC)
- As someone who supported both WormTT and Hog Farm's RfAs, option 1 > option 3 >> option 2. At each individual RfA the question is whether or not a specific editor should be an admin, and in both cases I felt that the answer was clearly "yes". However, I agree that RfA is a very intensive process. It requires a lot of time from the community, as others have argued better than I can. I prefer option 1 to option 3 because the existence of the procedure in option 3 implies that it is a good thing to go through 48 hours of RfA to re-request the mop. But anything which saves community time is a good thing. HouseBlaster (talk • he/they) 04:31, 16 December 2024 (UTC)
- I've seen this assertion made multiple times now that it "requires a lot of time from the community", yet nowhere has anybody articulated how or why this is true. What time is required, given that nobody is required to participate and everybody who does choose to participate can spend as much or as little time assessing the candidate as they wish? How and why does a reconfirmation RFA require any more time from editors (individually or collectively) than a request at BN? Thryduulf (talk) 04:58, 16 December 2024 (UTC)
- I think there are a number of factors and people are summing it up as "time-wasting" or similar:
- BN is designed for this exact scenario. It's also clearly a less contentious process.
- Snow closures are a good example of how we try to avoid wasting community time on unnecessary process and the same reasoning applies here. Misplaced Pages is not a bureaucracy and there's no reason to have a 7-day process when the outcome is a given.
- If former administrators continue to choose re-RFAs over BN, it could set a problematic precedent where future re-adminship candidates feel pressured to go through an RFA and all that entails. I don't want to discourage people already vetted by the community from rejoining the ranks.
- The RFA process is designed to be a thoughtful review of prospective administrators and I'm concerned these kinds of perfunctory RFAs will lead to people taking the process less seriously in the future.
- Daniel Quinlan (talk) 07:31, 16 December 2024 (UTC)
- Because several thousand people have RFA on their watchlist, and thousands more will see the "there's an open RFA" notice on theirs whether they follow it or not. Unlike BN, RFA is a process that depends on community input from a large number of people. In order to even realise that the RFA is not worth their time, they have to:
- Read the opening statement and first few question answers (I just counted, HF's opening and first 5 answers are about 1000 words)
- Think, "oh, they're an ex-admin, I wonder why they're going through RFA, what was their cloud"
- Read through the comments and votes to see if any issues have been brought up (another ~1000 words)
- None have
- Realise your input is not necessary and this could have been done at BN
- This process will be repeated by hundreds of editors over the course of a week. BugGhost 🦗👻 08:07, 16 December 2024 (UTC)
- That they were former admins has always been the first two sentences of their RfA’s statement, sentences which are immediately followed by that they resigned due to personal time commitment issues. You do not have to read the first 1000+ words to figure that out. If the reader wants to see if the candidate was lying in their statement, then they just have a quick skim through the oppose section. None of this should take more than 30 seconds in total. Aaron Liu (talk) 13:15, 16 December 2024 (UTC)
- Not everyone can skim things easily - it personally takes me a while to read sections. I don't know if they're going to bury the lede and say something like "Also I made 10,000 insane redirects and then decided to take a break just before arbcom launched a case" in paragraph 6. Hog Farm's self nom had two paragraphs about disputes and it takes more than 30 seconds to unpick that and determine if that is a "cloud" or not. Even for reconfirmations, it definitely takes more than 30 seconds to determine a conclusion. BugGhost 🦗👻 11:21, 17 December 2024 (UTC)
- They said they resigned due to personal time commitments. That is directly saying they weren't under a cloud, so I'll believe them unless someone claims the contrary in the oppose section. If the disputes section contained a cloud, the oppose section would have said so. One chooses to examine such nominations like normal RfAs. Aaron Liu (talk) 18:47, 17 December 2024 (UTC)
- Just to double check, you're saying that whenever you go onto an RFA you expect any reason to oppose to already be listed by someone else, and no thought is required? I am beginning to see how you are able to assess an RFA in under 30 seconds BugGhost 🦗👻 23:08, 17 December 2024 (UTC)
- Something in their statement would be an incredibly obvious reason. We are talking about the assessment whether to examine and whether the candidate could've used BN. Aaron Liu (talk) 12:52, 18 December 2024 (UTC)
- @Thryduulf let's not confuse "a lot of community time is spent" with "waste of time". Some people have characterized the re-RFAs as a waste of time but that's not the assertion I (and I think a majority of the skeptics) have been making. All RfAs use a lot of community time as hundreds of voters evaluate the candidate. They then choose to support, oppose, be neutral, or not vote at all. While editor time is not perfectly fixed - editors may choose to spend less time on non-Misplaced Pages activities at certain times - neither is it a resource we have in abundance anymore relative to our project. And so I think we, as a community, need to be thoughtful about how we're using that time, especially when that time would otherwise have been spent on other wiki activities. Best, Barkeep49 (talk) 22:49, 16 December 2024 (UTC)
- Absolutely nothing compels anybody to spend any time evaluating an RFA. If you think your wiki time is better spent elsewhere than evaluating an RFA candidate, then spend it elsewhere. That way only those who do think it is a good use of their time will participate and everybody wins. You win by not spending your time on something that you don't think is worth it, those who do participate don't have their time wasted by having to read comments (that contradict explicit policy) about how the RFA is a waste of time. Personally I regard evaluating whether a long-time admin still has the approval of the community to be a very good use of community time, you are free to disagree, but please don't waste my time by forcing me to read comments about how you think I'm wasting my time. Thryduulf (talk) 23:39, 16 December 2024 (UTC)
- I am not saying you or anyone else is wasting time and am surprised you are so fervently insisting I am. Best, Barkeep49 (talk) 03:34, 17 December 2024 (UTC)
- I don't understand how your argument that it is not a good use of community time is any different from arguing that it is a waste of time? Thryduulf (talk) 09:08, 17 December 2024 (UTC)
- Option 2 I don't mind the re-RFAs, but I'd appreciate if we encouraged restoration via BN instead, I just object to making it mandatory. EggRoll97 06:23, 16 December 2024 (UTC)
- Option 2. Banning voluntary re-RfAs would be a step in the wrong direction on admin accountability. Same with SNOW closing. There is no more "wasting of community time" if we let the RfA run for the full seven days, but allowing someone to dig up a scandal on the seventh day is an important part of the RfA process. The only valid criticism I've heard is that folks who do this are arrogant, but banning arrogance, while noble, seems highly impractical. Toadspike 07:24, 16 December 2024 (UTC)
- Option 3, 1, then 2, per HouseBlaster. Also agree with Daniel Quinlan. I think these sorts of RFA's should only be done in exceptional circumstances. Graham87 (talk) 08:46, 16 December 2024 (UTC)
- Option 1 as first preference, option 3 second. RFAs use up a lot of time - hundreds of editors will read the RFA and it takes time to come to a conclusion. When that conclusion is "well that was pointless, my input wasn't needed", it is not a good system. I think transparency and accountability is a very good thing, and we need more of it for resysoppings, but that should come from improving the normal process (BN) rather than using a different one (RFA). My ideas for improving the BN route to make it more transparent and better at getting community input are outlined over on the idea lab BugGhost 🦗👻 08:59, 16 December 2024 (UTC)
- Option 2, though I'd be for option 3 too. I'm all for administrators who feel like they want/should go through an RfA to solicit feedback even if they've been given the tools back already. I see multiple people talk about going through BN, but if I had to hazard a guess, it's way less watched than RfA is. However I do feel like watchlist notifications should say something to the effect of "A request for re-adminship feedback is open for discussion" so that people that don't like these could ignore them. ♠JCW555 (talk)♠ 09:13, 16 December 2024 (UTC)
- Option 2 because WP:ADMINISTRATORS is well-established policy. Read WP:ADMINISTRATORS#Restoration of admin tools, which says quite clearly,
Regardless of the process by which the admin tools are removed, any editor is free to re-request the tools through the requests for adminship process.
I went back 500 edits to 2017 and the wording was substantially the same back then. So, I simply do not understand why various editors are berating former administrators to the point of accusing them of wasting time and being arrogant for choosing to go through a process which is specifically permitted by policy. It is bewildering to me. Cullen328 (talk) 09:56, 16 December 2024 (UTC)
- Option 2 & 3 I think that there still should be the choice between BN and re-RFA for resysops, but I think that the re-RFA should stay like it is in Option 3, unless it is controversial, at which point it could be extended to the full RFA period. I feel like this would be the best compromise between not "wasting" community time (which I believe is a very overstated, yet understandable, point) and ensuring that the process is based on broad consensus and that our "representatives" are still supported. If I were WTT or Hog, I might choose to make the same decision so as to be respectful of the possibility of changing consensus. JuxtaposedJacob (talk) | :) | he/him | 10:45, 16 December 2024 (UTC)
- Option 2, for lack of a better choice. Banning re-RFAs is not a great idea, and we should not SNOW close a discussion that would give someone immunity from a certain degree of accountability. I've dropped an idea for an option 4 in the discussion section below. Giraffer (talk) 12:08, 16 December 2024 (UTC)
- Option 1 I agree with Graham87 that these sorts of RFAs should only be done in exceptional circumstances, and BN is the best place to ask for tools back. – DreamRimmer (talk) 12:11, 16 December 2024 (UTC)
- Option 2 I don't think prohibition makes sense. It also has weird side effects. eg: some admins' voluntary recall policies may now be completely void, because they would be unable to follow them even if they wanted to, because policy prohibits them from doing an RFA. (maybe if they're also 'under a cloud' it'd fit into exemptions, but if an admin's policy is "3 editors on this named list tell me I'm unfit, I resign" then this isn't really a cloud.) Personally, I think Hog Farm's RFA was unwise, as he's textbook uncontroversial. Worm's was a decent RFA; he's also textbook uncontroversial but it happened at a good time. But any editor participating in these discussions to give the "support" does so using their own time. Everyone who feels their time is wasted can choose to ignore the discussion, and instead it'll pass as 10-0-0 instead of 198-2-4. It just doesn't make sense to prohibit someone from seeking a community discussion, though. For almost anything, really. ProcrastinatingReader (talk) 12:33, 16 December 2024 (UTC)
- Option 2 It takes like two seconds to support or ignore an RFA you think is "useless"... can't understand the hullabaloo around them. I stand by what I said on WTT's re-RFA regarding RFAs being about evaluating trustworthiness and accountability. Trustworthy people don't skip the process. —k6ka 🍁 (Talk · Contributions) 15:24, 16 December 2024 (UTC)
- Option 1 - Option 2 is a waste of community time. - Ratnahastin (talk) 15:30, 16 December 2024 (UTC)
- 2 is fine. Strong oppose to 1 and 3. Opposing option 1 because there is nothing wrong with asking for extra community feedback. opposing option 3 because once an RfA has been started, it should follow the standard rules. Note that RfAs are extremely rare and non-contentious RfAs require very little community time (unlike this RfC which seems a waste of community time, but there we are). —Kusma (talk) 16:59, 16 December 2024 (UTC)
- 2, with no opposition to 3. I see nothing wrong with a former administrator getting re-confirmed by the community, and community vetting seems like a good thing overall. If people think it's a waste of time, then just ignore the RfA. Natg 19 (talk) 17:56, 16 December 2024 (UTC)
- 2 Sure, and clarify that should such an RFA be unsuccessful they may only regain the tools through a future RfA. — xaosflux 18:03, 16 December 2024 (UTC)
- Option 2 If contributing to such an RFA is a waste of your time, just don't participate. TheWikiToby (talk) 18:43, 16 December 2024 (UTC)
- No individual is wasting their time participating. Instead the person asking for a re-rfa is using tons of editor time by asking hundreds of people to vet them. Even the choice not to participate requires at least some time to figure out that this is not a new RfA; though at least in the two we've had recently it would require only as long as it takes to get to the RfA - for many a click from the watchlist and then another click into the rfa page - and to read the first couple of sentences of the self-nomination which isn't terribly long all things considered. Best, Barkeep49 (talk) 22:55, 16 December 2024 (UTC)
- I agree with you (I think) that it's a matter of perspective. For me, clicking the RFA link in my watchlist and reading the first paragraph of Hog Farm's nomination (where they explained that they were already a respected admin) took me about 10 seconds. Ten seconds is nothing; in my opinion, this is just a nonissue. But then again, I'm not an admin, checkuser, or an oversighter. Maybe the time to read such a nomination is really wasting their time. I don't know. TheWikiToby (talk) 23:15, 16 December 2024 (UTC)
- I'm an admin and an oversighter (but not a checkuser). None of my time was wasted by either WTT or Hog Farm's nominations. Thryduulf (talk) 23:30, 16 December 2024 (UTC)
- 2. Maintain the status quo. And stop worrying about a trivial non-problem. --Tryptofish (talk) 22:57, 16 December 2024 (UTC)
- 2. This reminds me of banning plastic straws (bear with me). Sure, I suppose in theory, that this is a burden on the community's time (just as straws do end up in landfills/the ocean). However, the amount of community time that is drained is minuscule compared to the amount of community time drained in countless, countless other fora and processes (just like the volume of plastic waste contributed by plastic straws is less than 0.001% of the total plastic waste). When WP becomes an efficient, well oiled machine, then maybe we can talk about saving community time by banning re-RFA's. But this is much ado about nothing, and indeed this plan to save people from themselves, and not allow them to simply decide whether to participate or not, is arguably more damaging than some re-RFAs (just as banning straws convinced some people that "these save-the-planet people are so ridiculous that I'm not going to bother listening to them about anything."). And, in fact, on a separate note, I'd actually love it if more admins just ran a re-RFA whenever they wanted. They would certainly get better feedback than just posting "What do my talk page watchers think?" on their own talk page. Or waiting until they get yelled at on their talk page, AN/ANI, AARV, etc. We say we want admins to respect feedback; does it have to be in a recall petition? --Floquenbeam (talk) 23:44, 16 December 2024 (UTC)
- What meaningful feedback has Hog Farm gotten? "A minority of people think you chose poorly in choosing this process to regain adminship". What are they supposed to do with that? I share your desire for editors to share meaningful feedback with administrators. My own attempt yielded some, though mainly offwiki where I was told I was both too cautious and too impetuous (and despite the seeming contradiction each was valuable in its own way). So yes let's find ways to get meaningful feedback to admins outside of recall or being dragged to ANI. Unfortunately re-RfA seems to be poorly suited to the task and so we can likely find a better way. Best, Barkeep49 (talk) 03:38, 17 December 2024 (UTC)
- Let us all take some comfort in the fact that no one has yet criticized this RfC comment as being a straw man argument. --Tryptofish (talk) 23:58, 18 December 2024 (UTC)
- No hard rule, but we should socially discourage confirmation RfAs. There is a difference between a hard rule, and a soft social rule. A hard rule against confirmation RfA's, like option 1, would not do a good job of accounting for edge cases and would thus be ultimately detrimental here. But a soft social rule against them would be beneficial. Unfortunately, that is not one of the options of this RfC. In short, a person should have a good reason to do a confirmation RfA. If you're going to stand up before the community and ask "do you trust me," that should be for a good reason. It shouldn't just be because you want the approval of your peers. (Let me be clear: I am not suggesting that is why either Worm or Hogfarm re-upped, I'm just trying to create a general purpose rule here.) That takes some introspection and humility to ask yourself: is it worth me inviting two or three hundred people to spend part of their lives to comment on me as a person? A lot of people have thrown around editor time in their reasonings. Obviously, broad generalizations about it aren't convincing anyone. So let me just share my own experience. I saw the watchlist notice open that a new RfA was being run. I reacted with some excitement, because I always like seeing new admins. When I got to the page and saw Hogfarm's name, I immediately thought "isn't he already an admin?" I then assumed, ah, it's just the classic RfA reaction at seeing a qualified candidate, so I'll probably support him since I already think he's an admin. But then as I started to do my due diligence and read, I saw that he really, truly, already had been an admin. At that point, my previous excitement turned to a certain unease. I had voted yes for Worm's confirmation RfA, but here was another...and I realized that my blind support for Worm might have been the start of an entirely new process. I then thought "bet there's an RfC going about this," and came here.
I then spent a while polishing up my essay on editor time, before taking time to write this message. All in all, I probably spent a good hour doing this. Previously, I'd just been clicking the random article button and gnoming. So, the longwinded moral: yeah, this did eat up a lot of my editor time that could have and was being spent doing something else. And I'd do it again! It was important to do my research and to comment here. But in the future...maybe I won't react quite as excitedly to seeing that RfA notice. Maybe I'll feel a little pang of dread...wondering if it's going to be a confirmation RfA. We can't pretend that confirmation RfA's are costless, and that we don't lose anything even if editors just ignore them. When run, it should be because they are necessary. CaptainEek ⚓ 03:29, 17 December 2024 (UTC)
- And for what its worth, support Option 3 because I'm generally a fan of putting more tools in people's toolboxes. CaptainEek ⚓ 03:36, 17 December 2024 (UTC)
In short, a person should have a good reason to do a confirmation RfA. If you're going to stand up before the community and ask "do you trust me," that should be for a good reason. It shouldn't just be because you want the approval of your peers.
Asking the community whether you still have their trust to be an administrator, which is what a reconfirmation RFA is, is a good reason. I expect getting a near-unanimous "yes" is good for one's ego, but that's just a (nice) side-effect of the far more important benefits to the entire community: a trusted administrator.
- The time you claim is being eaten up unnecessarily by reconfirmation RFAs was actually taken up by you choosing to spend your time writing an essay about using time for things you don't approve of and then hunting out an RFC in which you wrote another short essay about using time on things you don't approve of. Absolutely none of that is a necessary consequence of reconfirmation RFAs - indeed the response consistent with your stated goals would have been to read the first two sentences of Hog Farm's RFA and then closed the tab and returned to whatever else it was you were doing. Thryduulf (talk) 09:16, 17 December 2024 (UTC)
- WTT's and Hog Farm's RFAs would have been completely uncontentious, something I hope for at RfA and certainly the opposite of what I "dread" at RfA, if it were not for the people who attack the very concept of standing for RfA again despite policy being crystal clear that it is absolutely fine. I don't see how any blame for this situation can be put on WTT or HF. We can't pretend that dismissing uncontentious reconfirmation RfAs is costless; discouraging them removes one of the few remaining potentially wholesome bits about the process. —Kusma (talk) 09:53, 17 December 2024 (UTC)
- @CaptainEek Would you find it better if Watchlist notices and similar said "(re?)confirmation RFA" instead of "RFA"? Say for all voluntary RFAs from an existing admin or someone who could have used BN?
- As a different point, I would be quite against any social discouraging if we're not making a hard rule as such. Social discouraging is what got us the opposes at WTT/Hog Farm's RFAs, which I found quite distasteful and badgering. If people disagree with a process, they should change it. But if the process remains the same, I think it's important to not enable RFA's toxicity by encouraging others to namecall or re-argue the process in each RRFA. It's a short road from social discouragement to toxicity, unfortunately. Soni (talk) 18:41, 19 December 2024 (UTC)
- Yes I think the watchlist notice should specify what kind of RfA, especially with the introduction of recall. CaptainEek ⚓ 16:49, 23 December 2024 (UTC)
- Option 1. Will prevent the unnecessary drama trend we have been seeing recently. – Ammarpad (talk) 07:18, 17 December 2024 (UTC)
- Option 2 if people think there's a waste of community time, don't spend your time voting or discussing. Or add "reconfirmation" or similar to the watchlist notice. ~~ AirshipJungleman29 (talk) 15:08, 17 December 2024 (UTC)
- Option 3 (which I think is a subset of option 2, so I'm okay with the status quo, but I want to endorse giving 'crats the option to SNOW). While they do come under scrutiny from time to time for the extensive discussions in the "maybe" zone following RfAs, this should be taken as an indication that they are unlikely to do something like close it as SNOW in the event there are real and substantial concerns being raised. This is an okay tool to give the 'crats. As far as I can tell, no one has ever accused them of moving too quickly in this direction (not criticism; love you all, keep up the good work). Bobby Cohn (talk) 17:26, 17 December 2024 (UTC)
- Option 3 or Option 2. Further, if Option 2 passes, I expect it also ends all the bickering about lost community time. A consensus explicitly in favour of "This is allowed" should also be a consensus to discourage relitigation of this RFC. Soni (talk) 17:35, 17 December 2024 (UTC)
- Option 2: Admins who do not exude entitlement are to be praised. Those who criticize this humility should have a look in the mirror before accusing those who ask for reanointment from the community of "arrogance". I agree that it wouldn't be a bad idea to mention in parentheses that the RFA is a reconfirmation (watchlist) and wouldn't see any problem with crats snow-closing after, say, 96 hours. -- SashiRolls 18:48, 17 December 2024 (UTC)
- I disagree that BN shouldn't be the normal route. RfA is already as hard and soul-crushing as it is. Aaron Liu (talk) 20:45, 17 December 2024 (UTC)
- Who are you disagreeing with? This RfC is about voluntary RRfA. -- SashiRolls 20:59, 17 December 2024 (UTC)
- I know. I see a sizable amount of commenters here starting to say that voluntary re-RfAs should be encouraged, and your first sentence can be easily read as implying that admins who use the BN route exude entitlement. I disagree with that (see my reply to Thryduulf below). Aaron Liu (talk) 12:56, 18 December 2024 (UTC)
- One way to improve the reputation of RFA is for there to be more RFAs that are not terrible, such as reconfirmations of admins who are doing/have done a good job who sail through with many positive comments. There is no proposal to make RFA mandatory in circumstances it currently isn't, only to reaffirm that those who voluntarily choose RFA are entitled to do so. Thryduulf (talk) 21:06, 17 December 2024 (UTC)
- I know it's not a proposal, but there are enough people talking about this so far that it could become a proposal.
There's nearly nothing in between that could've lost the trust of the community. I'm sure there are many who do not want to be pressured into this without good reason. Aaron Liu (talk) 12:57, 18 December 2024 (UTC)
- Absolutely nobody is proposing, suggesting or hinting here that reconfirmation RFAs should become mandatory - other than comments from a few people who oppose the idea of people voluntarily choosing to do something policy explicitly allows them to choose to do. The best way to avoid people being pressured into being accused of arrogance for seeking reconfirmation of their status from the community is to sanction those people who accuse people of arrogance in such circumstances as such comments are in flagrant breach of AGF and NPA. Thryduulf (talk) 14:56, 18 December 2024 (UTC)
- Yes, I’m saying that they should not become preferred. There should be no social pressure to do RfA instead of BN, only pressure intrinsic to the candidate. Aaron Liu (talk) 15:37, 18 December 2024 (UTC)
- Whether they should become preferred in any situation forms no part of this proposal in any way shape or form - this seeks only to reaffirm that they are permitted. A separate suggestion, completely independent of this one, is to encourage (explicitly not mandate) them in some (but explicitly not all) situations. All discussions on this topic would benefit if people stopped misrepresenting the policies and proposals - especially when the falsehoods have been explicitly called out. Thryduulf (talk) 15:49, 18 December 2024 (UTC)
- I am talking and worrying over that separate proposal many here are suggesting. I don’t intend to oppose Option 2, and sorry if I came off that way. Aaron Liu (talk) 16:29, 18 December 2024 (UTC)
- Option 2. In fact, I'm inclined to encourage an RRfA over BN, because nothing requires editors to participate in an RRfA, but the resulting discussion is better for reaffirming community consensus for the former admin or otherwise providing helpful feedback. --Pinchme123 (talk) 21:45, 17 December 2024 (UTC)
- Option 2 WP:RFA has said "Former administrators may seek reinstatement of their privileges through RfA..." for over ten years and this is not a problem. I liked the opportunity to be consulted in the current RfA and don't consider this a waste of time. Andrew🐉(talk) 22:14, 17 December 2024 (UTC)
- Option 2. People who think it's not a good use of their time always have the option to scroll past. Innisfree987 (talk) 01:41, 18 December 2024 (UTC)
- 2 - If an administrator gives up sysop access because they plan to be inactive for a while and want to minimize the attack surface of Misplaced Pages, they should be able to ask for permissions back the quickest way possible. If an administrator resigns because they do not intend to do the job anymore, and later changes their mind, they should request a community discussion. The right course of action depends on the situation. Jehochman 14:00, 18 December 2024 (UTC)
- Option 1. I've watched a lot of RFAs and re-RFAs over the years. There's a darn good reason why the community developed the "go to BN" option: saves time, is straightforward, and if there are issues that point to a re-RFA, they're quickly surfaced. People who refuse to take the community-developed process of going to BN first are basically telling the community that they need the community's full attention on their quest to re-admin. Yes, there are those who may be directed to re-RFA by the bureaucrats, in which case, they have followed the community's carefully crafted process, and their re-RFA should be evaluated from that perspective. Risker (talk) 02:34, 19 December 2024 (UTC)
- Option 2. If people want to choose to go through an RFA, who are we to stop them? Stifle (talk) 10:25, 19 December 2024 (UTC)
- Option 2 (status quo/no changes) per meh. This is bureaucratic rulemongering at its finest. Every time RFA reform comes up some editors want admins to be required to periodically reconfirm, then when some admins decide to reconfirm voluntarily, suddenly that's seen as a bad thing. The correct thing to do here is nothing. If you don't like voluntary reconfirmation RFAs, you are not required to participate in them. Ivanvector (/Edits) 19:34, 19 December 2024 (UTC)
- Option 2 I would probably counsel just going to BN most of the time, however there are exceptions and edge cases. To this point these RfAs have been few in number, so the costs incurred are relatively minor. If the number becomes large then it might be worth revisiting, but I don't see that as likely. Some people will probably impose social costs on those who start them by opposing these RfAs, with the usual result, but that doesn't really change the overall analysis. Perhaps it would be better if our idiosyncratic internal logic didn't produce such outcomes, but that's a separate issue and frankly not really worth fighting over either. There are probably some meta issues here I'm unaware of; it's been a long time since I've had my finger on the community pulse, so to speak, but they tend to matter far less than people think they do. 184.152.68.190 (talk) 02:28, 20 December 2024 (UTC)
- Option 1, per WP:POINT, WP:NOT#SOCIALNETWORK, WP:NOT#BUREAUCRACY, WP:NOTABOUTYOU, and related principles. We all have far better things to do than read through and argue in/about a totally unnecessary RfA invoked as a "Show me some love!" abuse of process and waste of community time and productivity. I could live with option 3, if option 1 doesn't fly (i.e. shut these silly things down as quickly as possible). But option 2 is just out of the question. — SMcCandlish ☏ ¢ 😼 04:28, 22 December 2024 (UTC)
- Except none of the re-RFAs complained about have been "RfA invoked as a 'Show me some love!' abuse of process"; you're arguing against a strawman. Thryduulf (talk) 11:41, 22 December 2024 (UTC)
- It's entirely a matter of opinion and perception, or A) this RfC wouldn't exist, and B) various of your fellow admins like TonyBallioni would not have come to the same conclusion I have. Whether the underlying intent (which no one can determine, lacking as we do any magical mind-reading powers) is solely egotistical is ultimately irrelevant. The actual effect (what matters) of doing this whether for attention, or because you've somehow confused yourself into think it needs to be done, is precisely the same: a showy waste of community volunteers' time with no result other than a bunch of attention being drawn to a particular editor and their deeds, without any actual need for the community to engage in a lengthy formal process to re-examine them. — SMcCandlish ☏ ¢ 😼 05:49, 23 December 2024 (UTC)
- "or because you've somehow confused yourself into think it needs to be done" I and many others here agree and stand behind the very reasoning that has "confused" such candidates, at least for WTT. Aaron Liu (talk) 15:37, 23 December 2024 (UTC)
- Option 2. I see no legitimate reason why we should be changing the status quo. Sure, some former admins might find it easier to go through BN, and it might save community time, and most former admins already choose the easier option. However, if a candidate last ran for adminship several years ago, or if issues were raised during their tenure as admin, then it may be helpful for them to ask for community feedback, anyway. There is no "wasted" community time in such a case. I really don't get the claims that this violates WP:POINT, because it really doesn't apply when a former admin last ran for adminship 10 or 20 years ago or wants to know if they still have community trust. On the other hand, if an editor thinks a re-RFA is a waste of community time, they can simply choose not to participate in that RFA. Opposing individual candidates' re-RFAs based solely on opposition to re-RFAs in general is a violation of WP:POINT. – Epicgenius (talk) 14:46, 22 December 2024 (UTC)
- But this isn't the status quo? We've never done a re-RfA before now. The question is whether this previously unconsidered process, which appeared as an emergent behavior, is a feature or a bug. CaptainEek ⚓ 23:01, 22 December 2024 (UTC)
- There have been lots of re-RFAs, historically. They were more common in the 2000s. Evercat in 2003 is the earliest I can find, back before the re-sysopping system had been worked out fully. Croat Canuck back in 2007 was snow-closed after one day, because the nominator and applicant didn't know that they could have gone to the bureaucrats' noticeboard. For more modern examples, HJ Mitchell (2011) is relatively similar to the recent re-RFAs in the sense that the admin resigned uncontroversially but chose to re-RFA before getting the tools back. Immediately following and inspired by HJ Mitchell's, there was the slightly more controversial SarekOfVulcan. That ended successful re-RFAs until 2019's Floquenbeam, which crat-chatted. Since then, there have been none that I remember. There have been several re-RFAs from admins who were de-sysopped or at serious risk of de-sysopping, and a few interesting edge cases such as the potentially optional yet no-consensus SarekOfVulcan 3 in 2014 and the Rich Farmbrough case in 2015, but those are very different than what we're talking about today. GreenLipstickLesbian (talk) 00:01, 23 December 2024 (UTC)
- To add on to that, Misplaced Pages:Requests for adminship/Harrias 2 was technically a reconfirmation RFA, which in a sense can be treated as a re-RFA. My point is, there is some precedent for re-RFAs, but the current guidelines are ambiguous as to when re-RFAs are or aren't allowed. – Epicgenius (talk) 16:34, 23 December 2024 (UTC)
- Well thank you both, I've learned something new today. It turns out I was working on a false assumption. It has just been so long since a re-RfA that I assumed it was a truly new phenomenon, especially since there were two in short succession. I still can't say I'm thrilled by the process and think it should be used sparingly, but perhaps I was a bit over concerned. CaptainEek ⚓ 16:47, 23 December 2024 (UTC)
- Option 2 or 3 per Gnoming and CaptainEek. Such RfAs only require at most 30 seconds for one to decide whether or not to spend their time on examination. Unlike other prohibited timesinks, it's not like something undesirable will happen if one does not sink their time. Voluntary reconfirmation RfAs are socially discouraged, so there is usually a very good reason for someone to go back there, such as accountability for past statements in the case of WTT or large disputes during adminship in the case of Hog Farm. I don't think we should outright deny these, and there is no disruption incurred if we don't. Aaron Liu (talk) 15:44, 23 December 2024 (UTC)
- Option 2 but for largely the reasons presented by CaptainEek. KevinL (aka L235 · t · c) 21:58, 23 December 2024 (UTC)
- Option 2 (fine with better labeling) These don't seem harmful to me and, if I don't have time, I'll skip one and trust the judgment of my fellow editors. No objection to better labeling them though, as discussed above. RevelationDirect (talk) 22:36, 23 December 2024 (UTC)
- Option 1 because it's just a waste of time to go through and !vote on candidates who just want the mop restored when he or she or they could get it restored at BN with no problems. But I can also see option 2 being good for a former mod not in good standing. Therapyisgood (talk) 23:05, 23 December 2024 (UTC)
- If you think it is a waste of time to !vote on a candidate, just don't vote on that candidate and none of your time has been wasted. Thryduulf (talk) 23:28, 23 December 2024 (UTC)
- Option 2 per QoH (or me? who knows...) Kline • talk • contribs 04:24, 27 December 2024 (UTC)
Discussion
- @Voorts: If option 2 gets consensus how would this RfC change the wording "Regardless of the process by which the admin tools are removed, any editor is free to re-request the tools through the requests for adminship process." Or is this an attempt to see if that option no longer has consensus? If so, why wasn't alternative wording proposed? As I noted above, this feels premature in multiple ways. Best, Barkeep49 (talk) 21:43, 15 December 2024 (UTC)
- I've re-opened this per a request on my talk page. If other editors think this is premature, they can !vote accordingly and an uninvolved closer can determine if there's consensus for an early close in deference to the VPI discussion. voorts (talk/contributions) 21:53, 15 December 2024 (UTC)
- The discussion at VPI, which I have replied on, seems to me to be different enough from this discussion that both can run concurrently. That is, however, my opinion as a mere editor. — Jkudlick ⚓ (talk) 22:01, 15 December 2024 (UTC)
- @Voorts, can you please reword the RfC to make it clear that Option 2 is the current consensus version? It does not need to be clarified – it already says precisely what you propose. – bradv 22:02, 15 December 2024 (UTC)
- Question: May someone clarify why many view such confirmation RfAs as a waste of community time? No editor is obligated to take up their time and participate. If there's nothing to discuss, then there's no friction or dis-cussing, and the RfA smooth-sails; if a problem is identified, then there was a good reason to go to RfA. I'm sure I'm missing something here. Aaron Liu (talk) 22:35, 15 December 2024 (UTC)
- The intent of RfA is to provide a comprehensive review of a candidate for adminship, to make sure that they meet the community's standards. Is that happening with vanity re-RfAs? Absolutely not, because these people don't need that level of vetting. I wouldn't consider a week-long, publicly advertised back-patting to be a productive use of volunteer time. -- Ajraddatz (talk) 23:33, 15 December 2024 (UTC)
- But no volunteer is obligated to pat such candidates on the back. Aaron Liu (talk) 00:33, 16 December 2024 (UTC)
- Sure, but that logic could be used to justify any time sink. We're all volunteers and nobody is forced to do anything here, but that doesn't mean that we should promote (or stay silent with our criticism of, I suppose) things that we feel don't serve a useful purpose. I don't think this is a huge deal myself, but we've got two in a short period of time and I'd prefer to do a bit of push back now before they get more common. -- Ajraddatz (talk) 01:52, 16 December 2024 (UTC)
- Unlike other prohibited timesinks, it's not like something undesirable will happen if one does not sink their time. Aaron Liu (talk) 02:31, 16 December 2024 (UTC)
- Except someone who has no need for advanced tools and is not going to use them in any useful fashion would then skate through with nary a word said about their unsuitability, regardless of the foregone conclusion. The point of RFA is not to rubber-stamp. Unless there is some actual issue or genuine concern they might not get their tools back, they should just re-request them at BN and stop wasting people's time with pointless non-process wonkery. Only in death does duty end (talk) 09:05, 16 December 2024 (UTC)
- I’m confused. Adminship requires continued use of the tools. If you think they’re suitable for BN, I don’t see how doing an RfA suddenly makes them unsuitable. If you have concerns, raise them. Aaron Liu (talk) 13:02, 16 December 2024 (UTC)
- I don't think the suggested problem (which I acknowledge not everyone thinks is a problem) is resolved by these options. Admins can still run a re-confirmation RfA after regaining administrative privileges, or even initiate a recall petition. I think as discussed on Barkeep49's talk page, we want to encourage former admins who are unsure if they continue to be trusted by the community at a sufficient level to explore lower cost ways of determining this. isaacl (talk) 00:32, 16 December 2024 (UTC)
- Regarding option 3, establishing a consensus view takes patience. The intent of having a reconfirmation request for administrative privileges is counteracted by closing it swiftly. It provides incentive for rapid voting that may not provide the desired considered feedback. isaacl (talk) 17:44, 17 December 2024 (UTC)
- In re the idea that RfAs use up a lot of community time: I first started editing Misplaced Pages in 2014. There were 62 RfAs that year, which was a historic low. Even counting all of the AElect candidates as separate RfAs, including those withdrawn before voting began, we're still up to only 53 in 2024 – counting only traditional RfAs it's only 18, which is the second lowest number ever. By my count we've had 8 resysop requests at BN in 2024; even if all of those went to RfA, I don't see how that would overwhelm the community. That would still leave us on 26 traditional RfAs per year, or (assuming all of them run the full week) one every other week. Caeciliusinhorto-public (talk) 10:26, 16 December 2024 (UTC)
- What about an option 4 encouraging eligible candidates to go through BN? At the end of the Procedure section, add something like "Eligible users are encouraged to use this method rather than running a new request for adminship." The current wording makes re-RfAing sound like a plausible alternative to a BN request, when in actual fact the former rarely happens and always generates criticism. Giraffer (talk) 12:08, 16 December 2024 (UTC)
- Discouraging RFAs is the second last thing we should be doing (after prohibiting them), rather per my comments here and in the VPI discussion we should be encouraging former administrators to demonstrate that they still have the approval of the community. Thryduulf (talk) 12:16, 16 December 2024 (UTC)
- I think this is a good idea if people do decide to go with option 2, if only to stave off any further mixed messages that people are doing something wrong or rude or time-wasting or whatever by doing a second RfA, when it's explicitly mentioned as a valid thing for them to do. Gnomingstuff (talk) 15:04, 16 December 2024 (UTC)
- If RFA is explicitly a valid thing for people to do (which it is, and is being reaffirmed by the growing consensus for option 2) then we don't need to (and shouldn't) discourage people from using that option. The mixed messages can be staved off by people simply not making comments that explicitly contradict policy. Thryduulf (talk) 15:30, 16 December 2024 (UTC)
- Also a solid option, the question is whether people will actually do it. Gnomingstuff (talk) 22:55, 16 December 2024 (UTC)
- The simplest way would be to just quickly hat/remove all such comments. Pretty soon people will stop making them. Thryduulf (talk) 23:20, 16 December 2024 (UTC)
- This is not new. We've had sporadic "vanity" RfAs since the early days of the process. I don't believe they're particularly harmful, and think it unlikely that we will begin to see so many of them that they pose a problem. As such I don't think this policy proposal solves any problem we actually have. UninvitedCompany 21:56, 16 December 2024 (UTC)
- This apparent negative feeling evoked at an RFA for a former sysop everyone agrees is fully qualified and trusted certainly will leave a bad taste in the mouths of other former admins who might consider a reconfirmation RFA without first visiting BN. This comes in the wake of Worm That Turned's similar rerun. BusterD (talk) 23:29, 16 December 2024 (UTC)
- Nobody should ever be discouraged from seeking community consensus for significant changes. Adminship is a significant change. Thryduulf (talk) 23:32, 16 December 2024 (UTC)
- No argument from me. I was a big Hog Farm backer way back when he was merely one of Misplaced Pages's best content contributors. BusterD (talk) 12:10, 17 December 2024 (UTC)
- All these mentions of editor time make me have to mention The Grand Unified Theory of Editor Time (TLDR: our understanding of how editor time works is dreadfully incomplete). CaptainEek ⚓ 02:44, 17 December 2024 (UTC)
- I went looking for @Tamzin's comment because I know they had hung up the tools and came back, and I was interested in their perspective. But they've given me a different epiphany. I suddenly realize why people are doing confirmation RfAs: it's because of RECALL, and the one year immunity a successful RfA gives you. Maybe everyone else already figured that one out and is thinking "well duh Eek," but I guess I hadn't :) I'm not exactly sure what to do with that epiphany, besides note the emergent behavior that policy change can create. We managed to generate an entirely new process without writing a single word about it, and that's honestly impressive :P CaptainEek ⚓ 18:18, 17 December 2024 (UTC)
- Worm That Turned followed through on a pledge he made in January 2024, before the 2024 review of the request for adminship process began. I don't think a pattern can be extrapolated from a sample size of one (or even two). That being said, it's probably a good thing if admins occasionally take stock of whether or not they continue to hold the trust of the community. As I previously commented, it would be great if these admins would use a lower cost way of sampling the community's opinion. isaacl (talk) 18:31, 17 December 2024 (UTC)
- @CaptainEek: You are correct that a year's "immunity" results from a successful RRFA, but I see no evidence that this has been the reason for the RRFAs. Regards, Newyorkbrad (talk) 00:14, 22 December 2024 (UTC)
- If people decide to go through a community vote to get a one year immunity from a process that only might lead to a community vote which would then have a lower threshold than the one they decide to go through, and also give a year's immunity, then good for them. CMD (talk) 01:05, 22 December 2024 (UTC)
- @CaptainEek I'm mildly bothered by this comment, mildly because I assume it's lighthearted and non-serious. But just in case anyone does feel this way - I was very clear about my reasons for RRFA, I've written a lot about it, anyone is welcome to use my personal recall process without prejudice, and just to be super clear - I waive my "1 year immunity" - if someone wants to start a petition in the next year, do not use my RRfA as a reason not to. I'll update my userpage accordingly. I can't speak for Hog Farm, but his reasoning seems similar to mine, and immunity isn't it. Worm(talk) 10:28, 23 December 2024 (UTC)
- @Worm That Turned my quickly written comment was perhaps not as clear as it could have been :) I'm sorry, I didn't mean to suggest that y'all had run for dubious reasons. As I said in my !vote, "Let me be clear: I am not suggesting that is why either Worm or Hogfarm re-upped, I'm just trying to create a general purpose rule here". I guess what I really meant was that the reason that we're having this somewhat spirited conversation seems to be the sense that re-RfA could provide a protection from recall. If not for recall and the one year immunity period, I doubt we'd have cared so much as to suddenly run two discussions about this. CaptainEek ⚓ 16:59, 23 December 2024 (UTC)
- I don't agree. No one else has raised a concern about someone seeking a one-year respite from a recall petition. Personally, I think essentially self-initiating the recall process doesn't really fit the profile of someone who wants to avoid the recall process. (I could invent some nefarious hypothetical situation, but since opening an arbitration case is still a possibility, I don't think it would work out as planned.) isaacl (talk) 05:19, 24 December 2024 (UTC)
- I really don't think this is the reason behind WTT's and HF's reconfirmation RFA's. I don't think their RFA's had much utility and could have been avoided, but I don't doubt for a second that their motivations were anything other than trying to provide transparency and accountability for the community. BugGhost 🦗👻 12:04, 23 December 2024 (UTC)
- I don't really care enough about reconf RFAs to think they should be restricted, but what about a lighter ORCP-like process (maybe even in the same place) where fewer editors can indicate, "yeah OK, there aren't really any concerns here, it would probably save a bit of time if you just asked at BN". Alpha3031 (t • c) 12:40, 19 December 2024 (UTC)
Audio-video guidance
Hi there,
Per the post I made a few weeks ago regarding use of video for illustrative purposes, I think that MOS:Images might be expanded to mention audio-video content, as most of the same principles apply (eg aesthetics, quality, relevance, placement). There are some additional concerns: for example, if audio or video renders a primary source, eg is a recording of PD music such as Bach or similar, or is a reading of a PD text, then there might be some source validation requirements (ie, the music or text should match the original, within sensible boundaries; eg Mozart or Bach pieces may not be easily replicated with original instrumentation, or at least this should not be a requirement).
So one option would be for a simple statement at MOS:Images that these guidelines normally apply to AV, or separate guidance for AV that explains that MOS:Images contains guidance that generally applies to AV.
Is the correct process to raise an RFC? And is that done at MOS:Images, or WP:MOS, or here, or where? Jim Killock (talk) 19:38, 16 December 2024 (UTC)
- I've posted a longer request for help explaining the gap at MOS talk. It seems an RFC may not be needed but any advice would very much be appreciated. Jim Killock (talk) 20:28, 16 December 2024 (UTC)
Policy proposal: Establishment of research groups to edit articles
WITHDRAWN My policy proposal was too controversial in its original form and poorly thought out in its revision. Cnscrptr (talk) 23:48, 17 December 2024 (UTC)
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
In order to have more reliable and unbiased articles and make Misplaced Pages fully reliable, I believe it necessary for articles and topics to be created and handled by bodies of editors called "Research groups", focusing on researching about topics and articles to ensure Misplaced Pages articles are as accurate, neutral, and unbiased as possible.
I also propose that editing articles will be reserved for their respective research groups and that creating articles about a topic can only be done by said groups, with non-members being able to propose changes and articles via RFCs and whatnot. To join a research group, one must complete thorough training in the following areas:
- Bias reduction and neutrality training.
- Finding reliable sources and comparing sources.
- Professional research.
- Lessons about the topic.
- Misplaced Pages policy.
- Any other applicable areas.
This policy would also reduce vandalism and guideline violations across Misplaced Pages, making situations easier to handle.
Since Misplaced Pages is a widely-used source, it is time to move away from the current decentralized approach where just about anyone (no matter how inexperienced and biased) can edit, which has resulted in article bias, vandalism, unreliable sources, poor editing, sockpuppet accounts, edit wars, controversies, and a host of other problems.
A Research-Group-based editing Misplaced Pages will be far more stable and solve many of our persistent problems. Cnscrptr (talk) 15:28, 17 December 2024 (UTC)
- So something more like Citizendium? Schazjmd (talk) 16:11, 17 December 2024 (UTC)
- So basically we would deprecate half our policies and guidelines? Honestly trying to be nice but this is a terrible idea Horse Eye's Back (talk) 16:14, 17 December 2024 (UTC)
- Your proposal describes something that would not be Misplaced Pages at all (remember, we are the free encyclopaedia that anyone can edit), so is better suited for a fork. Usually proposals like yours (with editing only permitted to small numbers of "trusted" people) only work for small projects, if at all (I expect you would have to pay people to consider going through the "research group" training; it certainly does not sound attractive at all to me as a volunteer). You will need tens of thousands of editors to have any chance at not becoming outdated immediately. —Kusma (talk) 16:56, 17 December 2024 (UTC)
- "it is time to move away from the current decentralized approach where just about anyone can edit" - this proposal is dead on arrival - I support a SNOW close. BugGhost 🦗👻 17:30, 17 December 2024 (UTC)
- Congratulations! We're completely rejecting Misplaced Pages's open content, its editors' direct contributions (henceforth to be filtered through a new Misplaced Pages Politburo), with the basic principles of Misplaced Pages's founders soundly rejected. "Research group training" says it all. Mason.Jones (talk) 18:15, 17 December 2024 (UTC)
- Respectfully, I think this is a horrible idea. The decentralized model is what the entire site was built upon. It is even one of the first things advertised on the Main Page! If we got rid of it, the site wouldn't truly be Misplaced Pages anymore. QuicoleJR (talk) 18:20, 17 December 2024 (UTC)
- I mean, I think Misplaced Pages is big enough to drop the "free encyclopaedia" bit and attempt a nupedia-like approach. However, I digress and have dropped that clause in my proposal. Cnscrptr (talk) 20:06, 17 December 2024 (UTC)
- Few modifications of my proposal based on feedback:
- Keep the decentralized model.
- Drop the extensive training clause. No extensive training will be required.
- Individuals are expected to learn about the topic before editing.
- It will be necessary to join research groups (i.e., groups made of editors collaborating on the topic) to better learn about the topic and better contribute to it.
- Editing of various articles, including election articles, will require an extended-confirmed status.
- Cnscrptr (talk) 20:04, 17 December 2024 (UTC)
- So these groups are basically Wikiprojects but more restrictive? Tarlby 20:17, 17 December 2024 (UTC)
Modified proposal: Research panels between editors and greater article protections
I got your feedback on this policy proposal and decided to overhaul it to maintain the free nature of Misplaced Pages and strong editor collaboration while ensuring accuracy and reliability.
In order for Misplaced Pages's articles to be more reliable and less biased and to make Misplaced Pages a more reliable site, I believe that there should be a mandatory creation of Research panels: topic collaborations between editors focused on gathering information, comparing sources, establishing editing policies and guidelines in accordance with general Misplaced Pages rules, and ensuring adherence to them. Formation of research panels will be necessary in every topic.
- The editing model will remain decentralized. However, Research panels will determine policies based on consensus for their respective topics.
- Individuals can join a research group freely. However, individuals are expected to follow the policies and protocols set forth by the research group.
- Individuals are expected to learn about the topic before editing.
- It will be necessary to join research panels to better learn about the topic and better contribute to it.
- Various controversial articles, including election articles, will receive special protection or extended-confirmed protection upon request from the respective Research panel.
Research Panels will allow people to better understand the topic while integrating cooperation efforts to find the most accurate information and create the best possible articles.
--Cnscrptr (talk) 20:36, 17 December 2024 (UTC)
- This still feels like Wikiprojects with more rules. Tarlby 20:39, 17 December 2024 (UTC)
- Ah, then we can use that and implement it more strongly across every topic. We do need some sort of organization, though. Cnscrptr (talk) 20:41, 17 December 2024 (UTC)
- This modified proposal is sloppy at worst and I doubt it will get any approval. However, I think y'all should take whatever good ideas are there (if any) and also create spaces or something like that for greater learning and collaboration among editors to resolve conflicts and biases. Cnscrptr (talk) 20:39, 17 December 2024 (UTC)
- "Individuals are expected to learn about topic before editing"+"It will be necessary to join research panels to better learn about the topic" seems to add up to "Users are expected to be members of research panels before editing topics in that realm."
- So if I look at an article about "semiotics" (about which I know nothing) I am not allowed to correct, say, some MOS:LQ failures, some incorrect header capitalization, and a misspelling of Charles Schulz's last name until I train myself in semiotics and join the Official Misplaced Pages Semiotics Fun Team? -- Nat Gertler (talk) 20:48, 17 December 2024 (UTC)
- The whole training clause has been dropped and you could freely join the panels, so you would be allowed to do all that. Cnscrptr (talk) 20:50, 17 December 2024 (UTC)
- Also, the whole editing thing only applies to major edits, not minor ones.
- Either way, this isn't my best work. The first proposal was essentially nupedia revivalism and the modification is a sloppy attempt at salvaging it based on the original essence of investigating a topic further to put out the most accurate information (as is necessary in the Proto-Sinaitic script article). Cnscrptr (talk) 20:53, 17 December 2024 (UTC)
- I understand that I would be allowed to jump through whatever hoops you erect in order to be able to correct misspellings, but that seems an effective way to discourage me from improving the encyclopedia and getting it in line with general standards. (I am also unconvinced that this will reduce bias; it would seem to help solidify a smaller group of editors on various topics who are likely to be driven, quite possibly by bias, to go through the effort which you're adding.) -- Nat Gertler (talk) 20:54, 17 December 2024 (UTC)
- (Note, that was written before you made the "minor edit" change.) -- Nat Gertler (talk) 20:55, 17 December 2024 (UTC)
- Regardless, I am dropping my proposal because it is poorly written and will probably not be helpful at all. Cnscrptr (talk) 21:21, 17 December 2024 (UTC)
Is the "above 8000 words = split" an absolute rule?
I am referring to this chart found on WP:SIZE:
| Word count | What to do |
|---|---|
| > 15,000 words | Almost certainly should be divided or trimmed. |
| > 9,000 words | Probably should be divided or trimmed, though the scope of a topic can sometimes justify the added reading material. |
| > 8,000 words | May need to be divided or trimmed; likelihood goes up with size. |
| < 6,000 words | Length alone does not justify division or trimming. |
| < 150 words | If an article or list has remained this size for over two months, consider merging it with a related article. Alternatively, the article could be expanded; see Misplaced Pages:Stub. |
I have seen a few instances where an editor will raise the issue that an article is too large at, say, 7,500 or 8,100 words. We have multiple history pages (and medical/psychology pages) with well over 11,000 words, even some with over 16,000. Where does one draw the line? It seems like Misplaced Pages leaves it to the discretion of the editor after about 8,000 words. Plasticwonder (talk) 07:21, 18 December 2024 (UTC)
- Looking at the table, it's obvious that "above 8000 words=Split" is not "an absolute rule". I promise you that if it were, that table would say something that sounded remarkably like "if the article is above 8,000 words, then it absolutely must be split".
- Additionally, we have an official policy against absolute rules.
- Where one draws the line is: In a place that makes sense for the topic of that specific article, having thoughtfully considered all the facts and circumstances that apply to that unique article. WhatamIdoing (talk) 07:41, 18 December 2024 (UTC)
- There was a lengthy discussion at Misplaced Pages talk:Article size/Archive 6 about the size guidance, for the record. Splitting pages is a lot of work and not everyone thinks that spreading stuff over multiple pages is better for readers than having in one big page. Jo-Jo Eumerus (talk) 08:19, 18 December 2024 (UTC)
- In addition to the above, what matters for the technical aspects of article size is not the number of words but the number of bytes. Word count can only ever be an approximation of that as the length of the words used matters ("a" is 1 byte, "comprehensive" is 13), the number and size of included media matters very significantly more. Thryduulf (talk) 09:20, 18 December 2024 (UTC)
- I think WP:PEIS is a bigger technical challenge for long articles. The more templates, and the more complicated templates, the more likely you are to need to split for technical reasons. List of common misconceptions needs a split in part due to PEIS reasons. WhatamIdoing (talk) 18:43, 18 December 2024 (UTC)
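The word-versus-byte distinction above is easy to demonstrate. This is an illustrative sketch only (not any Misplaced Pages tool): two texts with identical word counts can differ sevenfold in byte size.

```python
def word_count(text: str) -> int:
    """Count whitespace-separated words, as size guidance does."""
    return len(text.split())

def byte_size(text: str) -> int:
    """Measure what technical limits actually see: encoded bytes."""
    return len(text.encode("utf-8"))

short = "a " * 1000             # 1000 one-letter words
long = "comprehensive " * 1000  # 1000 thirteen-letter words

print(word_count(short), byte_size(short))  # 1000 words, 2000 bytes
print(word_count(long), byte_size(long))    # 1000 words, 14000 bytes
```

Media files, templates, and markup widen this gap further, which is why word count can only ever approximate technical size.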
- What's more, there's nothing even in the excerpt here that would purport an absolute guideline. Remsense ‥ 论 09:44, 18 December 2024 (UTC)
- It isn't an absolute rule, but usually an article having an extremely high word count is evidence of a bigger problem with WP:SUMMARYSTYLE -- that it's too dense or detailed for a reader to use it as a first-port-of-call summary. As such, usually, it's a wise move to create daughter articles for the detailed material, and trim it down to its essentials in the main article; this improves the readability of the main article and allows interested readers to follow up into the nitty-gritty. As Jo-Jo Eumerus rightly says above, though, there's not really such a thing as an absolute rule in this place. UndercoverClassicist 09:59, 18 December 2024 (UTC)
- What we now know is that many readers are looking for specific information, with few reading from top to bottom, but the search engines send them to the mother article even when a more specific daughter article exists. So the first port of call needs to be the most highly detailed. The advice in WP:SUMMARYSTYLE is therefore considered well intentioned but obsolete; stripping the mother article and pushing information down to the daughter articles defeats our whole purpose in providing information. Hawkeye7 (discuss) 11:31, 22 December 2024 (UTC)
- When you say “we know…”, “is considered” and similar, are you pointing to specific previous discussions, RfCs etc on this matter? “In the wild”, as it were, I still see these size limits regularly invoked, even if the conversation rarely ends at them. UndercoverClassicist 09:17, 23 December 2024 (UTC)
- Rather than draw a line, I'd rather just remove that chart. Can't imagine why a suite of concrete word counts and procedures would ever be very helpful. — Rhododendrites \\ 16:29, 18 December 2024 (UTC)
- It absolutely makes sense to say at what page sizes editors should start considering other options, as well as where splitting can be absolutely unnecessary. Nothing wrong with the table as long as it's clear those aren't hard and fast rules. Masem (t) 16:41, 18 December 2024 (UTC)
- Agreed, I find it helpful because it helps me remember what is generally too long for mobile users (I understand that mobile is generally a blindspot for us as editors because the vast majority of us don't edit on mobile but most of the readers are actually on mobile) Horse Eye's Back (talk) 16:49, 18 December 2024 (UTC)
- I also believe that the chart is helpful. WhatamIdoing (talk) 17:29, 18 December 2024 (UTC)
- There don't seem to be any absolute rules laid out there... Even "Almost certainly" is qualified not an absolute rule. Horse Eye's Back (talk) 16:38, 18 December 2024 (UTC)
- The optimal article size varies quite a lot, actually. Key things we need to consider include:
- The likely readership. Someone who's looking up History of Libya under Muammar Gaddafi probably has time to read something long and thoughtful. Someone who's looking up emergency surgery might need basic facts, in simple words, very fast.
- The cognitive load associated with the topic. Star Wars is (very) long but easy to understand; Fourier inversion theorem is much shorter, but I bet it takes you longer to read, unless you have unusual expertise in mathematics.
- This is not the kind of thing on which we can produce simplistic guidance.—S Marshall T/C 17:03, 18 December 2024 (UTC)
- Misplaced Pages leaves it to the discretion of editors far far before 8,000 words. We have thousands of single sentence articles to attest to this. The average article is less than 700 words. CMD (talk) 17:15, 18 December 2024 (UTC)
- The median article length is around 350 words, and the mean is 750. About one in 75 articles has more than 6,000 words. WhatamIdoing (talk) 17:34, 18 December 2024 (UTC)
- You'll have to take the specifics up with WP:WPSIZE, although that ballpark range sounds the same. CMD (talk) 18:45, 18 December 2024 (UTC)
- No. Phil Bridger (talk) 18:52, 18 December 2024 (UTC)
- I've always felt that the kB of readable prose was a better metric for page size (such as is produced by various page size gadgets). Turns out, bigger words take longer to read than shorter words :P Doing it just by wordcount encourages a certain verbosity. For me, my rule of thumb has always aimed to keep big articles under 100kb readable prose. But there's no hard and fast rule, and there shouldn't be. CaptainEek ⚓ 23:08, 22 December 2024 (UTC)
- If I'm being honest, what might be the best metric is starting at the top and lightly hammering the Page Down key for a bit. If I groan before reaching the References section, it's too long. Remsense ‥ 论 23:10, 22 December 2024 (UTC)
- For example, results were heavily discouraging for George Washington until recently; as of today I no longer despair at the article's uncaring girth—thanks Nikki et al.! Remsense ‥ 论 23:18, 22 December 2024 (UTC)
- 26,000 words is 1.0 tomats. Another way to look at that table is by saying that if it's more than half the length of a book, it's pushing past being "an article" and edging up towards being "a book".
- Or you can look at it in terms of how many minutes reading the whole thing would take. There's quite a bit of variation, but for easy math, 300 words per minute means that a 15,000-word-long article would take 50 minutes to read, which almost certainly exceeds the interest and attention span of most readers. WhatamIdoing (talk) 00:35, 23 December 2024 (UTC)
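The reading-time arithmetic above is simple enough to sketch. Purely illustrative, assuming the ~300 words-per-minute figure mentioned; reading speeds vary widely.

```python
def reading_minutes(words: int, wpm: int = 300) -> float:
    """Estimate minutes to read an article at a given words-per-minute rate."""
    return words / wpm

# The 15,000-word example from the comment above:
print(reading_minutes(15_000))  # 50.0 minutes

# The 8,000-word guidance threshold works out to under half an hour:
print(reading_minutes(8_000))   # ~26.7 minutes
```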
- I think the most fundamental scalar isn't quite reading time or even visual size, but structural complexity—for an online encyclopedia article, being overlong expresses itself in my increasing inability to navigate an article comfortably to read or locate what I want, or to understand the structure of the scope covered by it at a glance. Remsense ‥ 论 00:38, 23 December 2024 (UTC)
- Come to think of it, one thing that makes an article feel longer than its word count is if its sections, media, and other landmarks have been laid out in a careless or unnatural way. Remsense ‥ 论 00:48, 23 December 2024 (UTC)
- I think the most fundamental scalar isn't quite reading time or even visual size, but structural complexity—for an online encyclopedia article, being overlong expresses itself in my increasing inability to navigate an article comfortably to read or locate what I want, or to understand the structure of the scope covered by it at a glance. Remsense ‥ 论 00:38, 23 December 2024 (UTC)
- No. And this was rather a pointless, self-answering question in the first place, not something for a VP thread. The answer to the posed question of 'Is the "above 8000 words=Split" an absolute rule?' is obviously "no", both by observing actual WP community practice, and just by reading the table the OP quoted: "> 8,000 words — May need to be divided or trimmed; likelihood goes up with size". Is anyone here actually confused into believing that A) "must" and "may" are synonymous, or B) that a guideline, to which reasonable exceptions sometimes apply, is somehow a legal-level policy that must be obeyed at all costs? In reality, there is never any hurry to split a large article, and doing it properly often involves a tremendous amount of work, involving both repair of citations (sometimes in great detail), and resummarizing the background context in the side article while also resummarizing the side-matter in WP:SUMMARY style within the main article (and doing them distinctly enough that the results are not obnoxiously repetitive if the reader moves between the articles). Doing a good job of this can take several days up to a month or longer of tightly focused work, depending on the detail level of the material, the number of citations, etc. It is not trivial, we're all volunteers here, and our readers are not going to keel over and die if they reach a detailed article that's a bit longer than they were expecting or would prefer. Ultimately, an article that is ginormous usually should split, but there is no deadline, and it needs to be done properly (plus there are often conceptually different ways to go about it from a content-flow perspective, and that might require some consensus discussion). — SMcCandlish ☏ ¢ 😼 01:22, 23 December 2024 (UTC)
- Ever since WAID reminded me of it, I've thought we should maybe link RFC 2119 somewhere as a lemma. Remsense ‥ 论 01:36, 23 December 2024 (UTC)
- I think I linked it once in Misplaced Pages:Policies and guidelines, years ago, and someone objected. I didn't follow up to see whether the objecting editor is one of the handful who think that should is a more polite and/or IAR-compliant way to say must, but as that's a fairly uncommon POV among editors, it probably wasn't. WhatamIdoing (talk) 05:41, 23 December 2024 (UTC)
- The linked document pushes very hard on should: "there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed" is not a low bar. It sounds much like must except when IAR. CMD (talk) 09:39, 23 December 2024 (UTC)
Draft:Manual of Style/Israel- and Palestine-related articles
I want to propose Draft:Manual of Style/Israel- and Palestine-related articles as a new guideline with an RfC. I'm starting this thread in case any outsiders to this area want to provide input or have questions. For context, the goal of this manual of style is to get agreement on broad principles to make editing easier in this topic area. As an example, WP:PIA5 is dealing with inconsistent use of the word "massacre" specifically, which has caused much arguing over whether there is a double standard, so this guideline makes the standards we should be using explicit. Chess (talk) (please mention me on reply) 06:38, 21 December 2024 (UTC)
- Are all four points currently included based on previous conversations, or are any novel to this proposal? On the broader framing, I wouldn't create the guideline solely talking about NPOV, it is helpful to have guidelines to help describe encyclopaedic style if nothing else. On the example of massacre, I'm not up to date with the recent or ongoing discussions, but I'm surprised there's no mention in the draft of WP:WTW, as this seems a classic case. CMD (talk) 07:00, 21 December 2024 (UTC)
- @Chipmunkdavis: The settlements language comes from previous discussions/consensus, likewise with the West Bank naming conventions section. The categorization language comes from a suggestion on the talk page of the draft.
- The "massacre" part is a novel part of the proposal. I would say that it seems like an obvious WP:WTW, but there's many RMs in the topic area in which editors use their own definition of the term "massacre" (like civilian death toll, mechanism of killing, see User:BilledMammal/ARBPIA_RM_statistics for a list of 24 RMs about the term "massacre"), and argue about whether or not the event meets that standard. I want to make it easier for editors to disengage by citing this guideline, instead of getting into lengthy arguments over points that don't contribute much to consensus (which is what happens now).
- The reason the guideline is short is because I believe it will give it the best chance of passing right now. In the future, I'd like to get consensus to add more points to this guideline. I thought proposing a lengthy guideline upfront would create a scenario in which everyone at an RfC agrees a guideline would benefit the area, but individual editors have a small issue that prevents them from supporting this specific version. Chess (talk) (please mention me on reply) 07:50, 21 December 2024 (UTC)
- At a quick look, it seems most pages in that statistics page were not moved to a title with "massacre"? It seems sensible to collate previous discussions together; if the massacre point is new (as new as something discussed 24 times can be?), there might be a need to confirm that one. CMD (talk) 08:20, 21 December 2024 (UTC)
- One issue with the massacre bit is that it should be part of an AT naming convention, not in the MOS. Likewise, appropriate categorizations usually aren't in the MOS. voorts (talk/contributions) 15:27, 21 December 2024 (UTC)
I (+other people) co-authored a draft article, but its submission was declined
@Walter Tau: This is not the place to ask questions about your drafts. You should ask @Bobby Cohn on his talk page. voorts (talk/contributions) 15:29, 22 December 2024 (UTC)
The following discussion is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
supposedly due to: "This submission is not adequately supported by reliable sources". It seems to me that there are plenty of good references now. I want to understand what is missing and how it can be improved. A side note: there are many published articles in English Wikipedia which are of much lower quality than this draft: https://en.wikipedia.org/Draft:Maternity_capital Walter Tau (talk) 15:19, 22 December 2024 (UTC)
The discussion above is closed. Please do not modify it. Subsequent comments should be made on the appropriate discussion page. No further edits should be made to this discussion.
Is it time to write an official policy on usage of AI?
Moved from Wikipedia:Administrators' noticeboard § Is it time to write an official policy on usage of AI? – voorts (talk/contributions) 03:20, 25 December 2024 (UTC)
I've been following WP:ANI for a few weeks now (because I like gossip, apparently) and I've noticed several incidents involving persons suspected of using AI for editing. Is it time to create an official policy to point to? WP:AI exists, but it's an informational page rather than a policy page. (I don't actually know which page to put this on, so it's going here.) guninvalid (talk) 02:25, 25 December 2024 (UTC)
- Wikipedia:Village pump (policy)#LLM/chatbot comments in discussions is ongoing now, for those interested. CMD (talk) 02:33, 25 December 2024 (UTC)
- See also this RfC. If this is an issue you're interested in, you might want to try getting involved with WikiProject AI Cleanup. voorts (talk/contributions) 03:22, 25 December 2024 (UTC)
- Please see Wikipedia:Large language models for an essay on this topic. Folks have been discussing this issue for a while now; whether or not it can become policy is another debate. — The Hand That Feeds You: 19:24, 25 December 2024 (UTC)
- Of note, there was an RfC in October 2023 where there was a consensus against promoting that essay to a policy or guideline. voorts (talk/contributions) 19:31, 25 December 2024 (UTC)