:Village pump (proposals): Difference between revisions - Misplaced Pages

Article snapshot taken from Wikipedia, licensed under the Creative Commons Attribution-ShareAlike license.
Revision as of 16:38, 31 January 2010 (DESiegel) → Latest revision as of 09:37, 25 December 2024 (Edin75)
{{redirect|WP:PROPOSE|proposing article deletion|Misplaced Pages:Proposed deletion|and|Misplaced Pages:Deletion requests}}
<noinclude>{{short description|Discussion page for new proposals}}{{pp-move-indef}}{{Village pump page header|Proposals|alpha=yes|
The '''proposals''' section of the ] is used to offer specific changes for discussion. ''Before submitting'':
* Check to see whether your proposal is already described at ''']'''. You may also wish to search the ].
* This page is for '''concrete, actionable''' proposals. Consider developing earlier-stage proposals at ].
* Proposed '''policy''' changes belong at ].
* Proposed '''speedy deletion criteria''' belong at ].
* Proposed '''WikiProjects''' or '''task forces''' may be submitted at ].
* Proposed '''new wikis''' belong at ].
* Proposed '''new articles''' belong at ].
* Discussions or proposals which warrant the '''attention or involvement of the Wikimedia Foundation''' belong at ].
* '''Software''' changes which have consensus should be filed at ].
Discussions are automatically archived after remaining inactive for nine days.<!--
Villagepumppages intro end
-->|WP:VPR|WP:VP/PR|WP:VPPRO|WP:PROPS}}__NEWSECTIONLINK__
{{User:MiszaBot/config
| algo = old(9d)
| archive = Misplaced Pages:Village pump (proposals)/Archive %(counter)d
| counter = 215
| maxarchivesize = 300K
| archiveheader = {{Misplaced Pages:Village pump/Archive header}}
| minthreadstoarchive = 1
| minthreadsleft = 5
}}
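The MiszaBot parameters above (algo = old(9d), minthreadsleft = 5, minthreadstoarchive = 1) encode a simple archiving rule: archive threads inactive for nine days, but never leave fewer than five threads on the page. A minimal sketch of that rule in Python; the function name and the title → last-comment-timestamp mapping are invented for illustration, not MiszaBot's actual implementation:

```python
from datetime import datetime, timedelta

def threads_to_archive(threads, now, age_days=9, min_left=5, min_to_archive=1):
    """Pick threads whose last comment is older than age_days (algo=old(9d)),
    never leaving fewer than min_left threads on the page (minthreadsleft),
    and only archiving when at least min_to_archive qualify (minthreadstoarchive)."""
    stale = [title for title, last in threads.items()
             if now - last > timedelta(days=age_days)]
    stale.sort(key=threads.get)                      # archive oldest threads first
    stale = stale[:max(0, len(threads) - min_left)]  # honor minthreadsleft
    return stale if len(stale) >= min_to_archive else []

threads = {
    "RfC: HistMerge logging": datetime(2024, 12, 20),
    "CheckUser for all new users": datetime(2024, 12, 1),
    "Old thread A": datetime(2024, 11, 1),
    "Old thread B": datetime(2024, 11, 2),
    "Old thread C": datetime(2024, 11, 3),
    "Old thread D": datetime(2024, 11, 4),
}
print(threads_to_archive(threads, now=datetime(2024, 12, 25)))
```

With six threads on the page and a floor of five, only the single oldest stale thread is selected, even though five threads exceed the nine-day age cutoff.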
{{centralized discussion|compact=yes}}
__TOC__
{{anchor|below_toc}}
]
]
]
]
</noinclude>
{{clear}}


== RfC: Log the use of the ] at both the merge target and merge source ==


<!-- ] 16:01, 25 December 2024 (UTC) -->{{User:ClueBot III/DoNotArchiveUntil|1735142470}}


Currently, there are open Phabricator tickets proposing that the use of the HistMerge tool be logged at the target article in addition to the source article. Several proposals have been made:
*'''Option 1a''': When using ], a null edit should be placed in the page histories of both the merge target and the merge source stating that a history merge took place.
*: (]: '''Special:MergeHistory should place a null edit in the page's history describing the merge''', authored Jul 13 2023)
*'''Option 1b''': When using ], add a log entry at both the HistMerge target and source that records the existence of a history merge.
*: (]: '''Merging pages should add a log entry to the destination page''', authored Nov 8 2015)
*'''Option 2''': Do not log the use of the ] tool at the merge target, maintaining the current status quo.
Should the use of the HistMerge tool be explicitly logged? If so, should the use be logged via an entry in the page history or should it instead be held in a dedicated log? — ]&nbsp;<sub>]</sub> 15:51, 20 November 2024 (UTC)
===Survey: Log the use of the ]===
*'''Option 1a/b'''. I am in principle in support of adding this logging functionality, since people don't typically have access to the source article title (where the histmerge is currently logged) when viewing an article in the wild. There have been several times I can think of when I've been going diff hunting or browsing page history and where some explicit note of a histmerge having occurred would have been useful. As for whether this is logged directly in the page history (as is done currently with page protection) or if this is merely in a separate log file, I don't have particularly strong feelings, but I do think that adding functionality to log histmerges at the target article would improve clarity in page histories. — ]&nbsp;<sub>]</sub> 15:51, 20 November 2024 (UTC)
*'''Option 1a/b'''. No strong feelings on which way is best (I'll let the experienced histmergers comment on this), but logging a history merge definitely seems like a useful feature. ] (] · ]) 16:02, 20 November 2024 (UTC)
*'''Option 1a/b'''. Choatic Enby has said exactly what I would have said (but more concisely) had they not said it first. ] (]) 16:23, 20 November 2024 (UTC)
*'''1b''' would be most important to me but '''1a''' would be nice too. But this is really not the place for this sort of discussion, as noted below. ] (]) 16:28, 20 November 2024 (UTC)
* '''Option 2''' History merging done right should be seamless, leaving the page indistinguishable from if the copy-paste move being repaired had never happened. Adding extra annotations everywhere runs counter to that goal. Prefer 1b to 1a if we have to do one of them, as the extra null edits could easily interfere with the history merge being done in more complicated situations. ] ] 16:49, 20 November 2024 (UTC)
*:Could you expound on why they should be indistinguishable? I don't see how this could harm any utility. A log action at the target page would not show up in the history anyways, and a null edit would have no effect on comparing revisions. ] (]) 17:29, 20 November 2024 (UTC)
*:: Why shouldn't it be indistinguishable? Why is it necessary to go out of our way to say even louder that someone did something wrong and it had to be cleaned up? ] ] 17:45, 20 November 2024 (UTC)
*:::All cleanup actions are logged to all the pages they affect. ] (]) 18:32, 20 November 2024 (UTC)
* '''2''' History merges ], so this survey name is somewhat off the mark. As someone who does this work: I do not think these should be displayed at either location. It would cause a lot of noise in history pages that people probably would not fundamentally understand (2 revisions for "please process this" and "remove tag" and a 3rd revision for the suggested log), and it would be "out of order" in that you will have merged a bunch of revisions but none of those revisions would be nearby the entry in the history page itself. I also find protections noisy in this way as well, and when moves end up causing a need for history merging, you end up with doubled move entries in the merged history, which also is confusing. Adding history merges to that case? No thanks. History merges are more like deletions and undeletions, which already do not add displayed content to the history view. ] (]) 16:54, 20 November 2024 (UTC)
*:They presently are logged, but only at the source article. Take for example . When I search for the merge target, I get . It's only when I search the that I'm able to get a result, but there isn't a way to ''know'' the merge source.
*:If I don't know when or if the histmerge took place, and I don't know what article the history was merged from, I'd have to look through the entirety of the merge log manually to figure that out—and that's suboptimal. — ]&nbsp;<sub>]</sub> 17:05, 20 November 2024 (UTC)
*::... Page moves do the same thing, only log the move source. Yet this is not seen as an issue? :)
*::But ignoring that, why is it valuable to know this information? What do you gain? And is what you gain actually valuable to your end objective? For example, let's take your {{tq|There have been several times I can think of when I've been going diff hunting or browsing page history and where some explicit note of a histmerge having occurred would have been useful.}} Is not the revisions left behind in the page history by both the person requesting and the person performing the histmerge not enough (see {{tl|histmerge}})? There are history merges done that don't have that request format such as the WikiProject history merge format, but those are almost always ancient revisions, so what are you gaining there? And where they are not ancient revisions, they are trivial kinds of the form "draft x -> page y, I hate that I even had to interact with this history merge it was so trivial (but also these are great because I don't have to spend significant time on them)". ] (]) 17:32, 20 November 2024 (UTC)
*:::{{tqb|... Page moves do the same thing, only log the move source. Yet this is not seen as an issue? :)}}I don't think everyone would necessarily agree (see Toadspike's comment below). ] (] · ]) 17:42, 20 November 2024 (UTC)
*:::Page moves ''do'' leave a null edit on the page that describes where the page was moved from and was moved to. And it's easy to work backwards from there to figure out the page move history. The same cannot be said of the ] tool, which doesn't make it easy to re-construct what the heck went on unless we start diving naïvely through the logs. — ]&nbsp;<sub>]</sub> 17:50, 20 November 2024 (UTC)
*::It can be *possible* to find the original history merge source page without looking through the merge log, but the method for doing so is very brittle and extremely hacky. Basically, look for redirects to the page using "What links here", and find the redirect whose first edit has an unusual byte difference. This relies on the redirect being stable and not deleted or retargeted. There is also ] that relies on byte difference bugs as described in the above-linked discussion by ]. Both of those are ... particularly awful. ] (]) 03:48, 21 November 2024 (UTC)
*:::In the given example, the history-merge occurred ]. Your "log" is the edit summaries. "Created page with '..." is the edit summary left by a normal page creation. But wait, there is page history before the edit that created the page. How did it get there? Hmm, the previous edit summary "Declining submission: v - Submission is improperly sourced (AFCH)" tips you off to look for the same title in draft: namespace. Anyone looking for help with understanding a particular merge may ask me and I'll probably be able to figure it out for you. – ] (]) 05:51, 21 November 2024 (UTC)
*:::Here's another example, of a merge within mainspace. The ] (created by the MediaWiki software) of this ] "Removed redirect to {{no redirect|Jordan B. Acker}}" points you to the page that was merged at that point. . . . – ] (]) 13:44, 21 November 2024 (UTC)
*::::There are times where those traces aren't left. ] (]) 13:51, 21 November 2024 (UTC)
*:::::Here's another scenario, this one from ]. The shows an edit adding '''+5,800''' bytes, leaving the page with 5,800 bytes. But the previous edit did not leave a blank page. Some say this is a bug, but it's also a feature. That "bug" is actually your "log" reporting that a hist-merge occurred at that edit. , the log for that page shows a temp delete & undelete setting the page up for a merge. The first item on the log:
*::::::@ 20:14, 16 January 2021 Tbhotch moved page ] to {{no redirect|Flag of the Republic of Yucatán}} (Correct name)
*:::::clues you in to where to look for the source of the merge. , that single edit which removed '''−5,633''' bytes tells you that previous history was merged off of that page. The provides the details. – ] (]) 16:03, 21 November 2024 (UTC)
*:::::(]: '''Special:MergeHistory causes incorrect byte change values in history''', authored Dec 2 2014) <!-- Template:Unsigned --><small class="autosigned">—&nbsp;Preceding ] comment added by ] (] • ]) 18:13, 21 November 2024 (UTC)</small>
*::::::Again, there are times where the clues are much harder to find, and even in those cases, it'd be much better to have a unified and assured way of finding the source. ] (]) 16:11, 21 November 2024 (UTC)
*:::::::Indeed. This is a prime example of an unintended ]. ] (]) 08:50, 22 November 2024 (UTC)
*::::::::Yeah. I don't think that we can permanently rely on that, given that future versions of MediaWiki are not bound in any real way to support that workaround. — ]&nbsp;<sub>]</sub> 04:24, 3 December 2024 (UTC)
*'''Support 1b''' (log only), oppose 1a (null edit). I defer to the experienced histmergers on this, and if they say that adding null edits everywhere would be inconvenient, I believe them. However, I haven't seen any arguments against logging the histmerge at both articles, so I'll support it as a sensible idea. (On a similar note, it bothers me that page moves are only logged at one title, not both.) ] ]] 17:10, 20 November 2024 (UTC)
* '''Option 2'''. The merges are ], so there’s no reason to add it to page histories. While it may be useful for habitual editors, it will just confuse readers who are looking for an old revision and occasional editors. ] &amp; ]<sub>(])</sub> 18:33, 20 November 2024 (UTC)
*:But only the source page is logged as the "target". IIRC it currently can be a bit hard to find out when and who merged history into a page if you don't know the source page and the mergeperson didn't leave any editing indication that they merged something. ] (]) 18:40, 20 November 2024 (UTC)
*'''1B'''. The present situation of the action being only logged at one page is confusing and unhelpful. But so would be injecting null-edits all over the place. <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — ] ] ] 😼 </span> 01:38, 21 November 2024 (UTC)
* '''Option 2'''. This exercise is dependent on finding a volunteer MediaWiki developer willing to work on this. Good luck with that. Maybe you'll find one a decade from now. – ] (]) 05:51, 21 November 2024 (UTC)
*: And, more importantly, someone in the to review it. I suspect there are many people, possibly including myself, who would code this if they didn't think they were wasting their time shuffling things from one queue to another. ] ] 06:03, 21 November 2024 (UTC)
*::That link requires a Gerrit login/developer account to view. It was a struggle to get in to mine (I only have one because of an old Toolforge account and I'd basically forgotten about it), but for those who don't want to go through all that, that group has only 82 members (several of whose usernames I recognise) and I imagine they have a lot on their collective plate. There's more information about these groups at ]. ] (]) 15:38, 21 November 2024 (UTC)
*::: Sorry, I totally forgot Gerrit behaved in that counterintuitive way and hid public information from logged out users for no reason. The things you miss if Gerrit interactions become something you do pretty much every day. If you want to count the members of the group you also have to follow the chain of included groups - it also includes https://ldap.toolforge.org/group/wmf, https://ldap.toolforge.org/group/ops and (another login-only link), as well as a few other permission edge cases (almost all of which are redundant because the user is already in the MediaWiki group) ] ] 18:07, 21 November 2024 (UTC)
*'''Support 1a/b''', and I would encourage the closer to disregard any opposition based solely on the chances of someone ever actually implementing it. <span style="white-space: nowrap;">—]&nbsp;<sup>(]·])</sup></span> 12:52, 21 November 2024 (UTC)
*:Fine. This stupid RfC isn't even asking the right questions. Why did I need to delete (an expensive operation) and then restore a page in order to ? Should we fix the software so that it doesn't require me to do that? Why did the page-mover resort to cut-paste because there was page history blocking their move, rather than ask an administrator for help? Why doesn't the software just let them move over that junk page history themselves, which would negate the need for a later hist-merge? (Actually in this case the offending user only has made 46 edits, so they don't have page-mover privileges. But they were able to move a page. They just couldn't move it back a day later after they changed their mind.) ] (]) 13:44, 21 November 2024 (UTC)
*::Yeah, ] would be amazing, for a start. ] (]) 15:38, 21 November 2024 (UTC)
*'''Option 1b'''{{snd}}changes to a page's history should be listed in that page's log. There's no need to make a null edit; pagemove null edits are useful because they meaningfully fit into the page's revision history, which isn't the case here. ] (]) 00:55, 22 November 2024 (UTC)
*'''Option 1b''' sounds best since that's what those in the know seem to agree on, but 1a would probably be OK. ] (]) 03:44, 23 November 2024 (UTC)
*'''Option 1b''' seems like the one with the best transparency to me. Thanks. <span style="text-shadow:3px 3px 3px lightblue">]<sup>'''537'''<sub>]</sub> (]|])</sup></span> 06:59, 25 November 2024 (UTC)
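The byte-difference trick discussed in the survey above (Special:MergeHistory producing an edit whose reported byte change disagrees with the surrounding revision sizes, per the 2014 phab ticket) can be expressed as a small heuristic. A sketch under assumed inputs — revision sizes and reported deltas as plain tuples, not the actual MediaWiki API:

```python
def find_merge_seams(revisions):
    """revisions: oldest-first list of (page_size, reported_delta) pairs.
    After a history merge, a revision's reported byte change can disagree
    with the actual difference from the revision now preceding it
    (e.g. '+5,800' on a page that was not previously blank).
    Return indices of such suspicious revisions."""
    return [i for i in range(1, len(revisions))
            if revisions[i][1] != revisions[i][0] - revisions[i - 1][0]]

# Hypothetical history: the last revision claims +5800, but relative to
# the revision now shown before it the page only grew by 200 bytes.
history = [(1200, 1200), (5600, 4400), (5800, 5800)]
print(find_merge_seams(history))
```

As noted in the discussion, this is a side effect rather than a guaranteed signal: a seamless merge, or one performed before a deletion/undeletion cycle, may leave no such mismatch at all.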


===Discussion: Log the use of the ]===
*I'm noticing some commentary in the above RfC (on widening importer rights) as to whether or not this might be useful going forward. I do think that having the community weigh in one way or another here would be helpful in terms of deciding whether or not this functionality is worth building. — ]&nbsp;<sub>]</sub> 15:51, 20 November 2024 (UTC)
*:<small>] . — ]&nbsp;<sub>]</sub> 16:01, 20 November 2024 (UTC)</small>
*This is a missing feature, not a config change. ] (]) 15:58, 20 November 2024 (UTC)
*:Indeed; it's about a feature proposal. — ]&nbsp;<sub>]</sub> 16:02, 20 November 2024 (UTC)
*As many of the above, this is a ] and not something that should be special for the English Misplaced Pages. — ] <sup>]</sup> 16:03, 20 November 2024 (UTC)
*:See ]. I'm not seeing any sort of reason this would need per-project opt-ins requiring a local discussion. — ] <sup>]</sup> 16:05, 20 November 2024 (UTC)
*:True, but I agree with Red-tailed hawk that it's good to have the English Misplaced Pages community weigh in on whether we want that feature implemented here to begin with. ] (] · ]) 16:05, 20 November 2024 (UTC)
* Here is the , and the project's . – ] (]) 18:13, 21 November 2024 (UTC)
* I agree that this is an odd thing to RFC. This is about a feature in MediaWiki core, and there are a lot more users of MediaWiki core than just English Misplaced Pages. However, please do post the results of this RFC to both of the phab tickets. It will be a useful data point with regards to what editors would find useful. –] <small>(])</small> 23:16, 21 November 2024 (UTC)


== CheckUser for all new users ==


All new users (IPs and accounts) should be subject to CheckUser against known socks. This would prevent recidivist socks from returning and save the time and energy of users who have to prove a likely case at SPI. Recidivist socks often get better at covering their "tells" each time making detection increasingly difficult. Users should not have to make the huge effort of establishing an SPI when editing from an IP or creating a new account is so easy. We should not have to endure ], ] or ] if CheckUser can prevent them. ] (]) 04:06, 22 November 2024 (UTC)
:I'm pretty sure that even if we had enough checkuser capacity to routinely run checks on every new user, doing so would be contrary to global policy. ] (]) 04:14, 22 November 2024 (UTC)
:Setting aside privacy issues, the fact that the WMF wouldn't let us do it, and a few other things: Checking a single account, without any idea of who you're comparing them to, is not very effective, and the worst LTAs are the ones it would be least effective against. This has been floated several times in the much narrower context of adminship candidates, and rejected each time. It probably belongs on ] by now. <span style="font-family:courier"> -- ]</span><sup class="nowrap">&#91;]&#93;</sup> <small>(])</small> 04:21, 22 November 2024 (UTC)
::Why can't it be automated? What are the privacy issues and what would WMF concerns be? There has to be a better system than SPI which imposes a huge burden on the filer (and often fails to catch socks) while we just leave the door open for LTAs. ] (]) 04:39, 22 November 2024 (UTC)
:::How would it be automated? We can't just block everyone who even sometimes shares an IP with someone, which is most editors once you factor in mobile editing and institutional WiFi. Even if we had a system that told checkusers about all shared-IP situations and asked them to investigate, what are they investigating for? The vast majority of IP overlaps will be entirely innocent, often people who don't even know each other. There's no way for a checkuser to find any signal in all that noise. So the only way a system like this would work is if checkusers manually identified IP ranges that are being used by LTAs, and then placed blocks on those ranges to restrict them from account creation... Which is what already happens. <span style="font-family:courier"> -- ]</span><sup class="nowrap">&#91;]&#93;</sup> <small>(])</small> 04:58, 22 November 2024 (UTC)
::::I would assume that IT experts can work out a way to automate CheckUser. If someone edits on a shared IP used by a previous sock that should be flagged and human CheckUsers notified so they can look at the edits and the previous sock edits and warn or block as necessary. ] (]) 05:46, 22 November 2024 (UTC)
:::::We already have ]. For cases it doesn't catch, there's an additional manual layer of blocking, where if a sock is caught on an IP that's been used before but wasn't caught by autoblock, a checkuser will block the IP if it's technically feasible, sometimes for months or years at a time. Beyond that, I don't think you can imagine just how often "someone edits on a shared IP used by a previous sock". I'm doing that right now, probably, because I'm editing through T-Mobile. Basically anyone who's ever edited in India or Nigeria has been on an IP used by a previous sock. Basically anyone who's used a large institution's WiFi. There is not any way to weed through all that noise with automation. <span style="font-family:courier"> -- ]</span><sup class="nowrap">&#91;]&#93;</sup> <small>(])</small> 05:54, 22 November 2024 (UTC)
::::::Addendum: An actually potentially workable innovation would be something like a system that notifies CUs if an IP is autoblocked more than once in a certain time period. That would be a software proposal for Phabricator, though, not an enwiki policy proposal, and would still have privacy implications that would need to be squared with the WMF. <span style="font-family:courier"> -- ]</span><sup class="nowrap">&#91;]&#93;</sup> <small>(])</small> 05:57, 22 November 2024 (UTC)
::::::I believe Tamzin has it about right, but I want to clarify a thing. If you're hypothetically using T-Mobile (and this also applies to many other ISPs and many LTAs) then the odds are very high that you're using an IP address which has never been used before. With T-Mobile, which is not unusually large by any means, you belong to at least one /32 range which contains a number of IP addresses so big that it has 29 digits. These ranges contain a huge number of users. At the other extreme you have some countries with only a handful of IPs, which everyone uses. These IPs also typically contain a huge number of users. TL;DR: if someone is using a single IP on their own then we'll probably just block it, otherwise you're talking about matching a huge number of users. -- ] <sup>]</sup> 03:20, 23 November 2024 (UTC)
:::::::As I understand it, if you're hypothetically using T-Mobile, then you're not editing, because someone range-blocked the whole network in pursuit of a vandal(s). See ]. ] (]) 03:36, 23 November 2024 (UTC)
::::::::T-Mobile USA is a perennial favourite of many of the most despicable LTAs, but that's besides the point. New users with an account can actually edit from T-Mobile. They can also edit from Jio, or Deutsche Telecom, Vodafone, or many other huge networks. -- ] <sup>]</sup> 03:50, 23 November 2024 (UTC)
:Would violate the policy ]. –] <small>(])</small> 04:43, 22 November 2024 (UTC)
::It would apply to '''every new User''' as a protective measure against sockpuppetry, like a credit check before you get a card/overdraft. ] is archaic like the whole burdensome SPI system that forces honest users to do all the hard work of proving sockpuppetry while socks and vandals just keep being welcomed in under ]. ] (]) 05:46, 22 November 2024 (UTC)
:::What you're suggesting is to just inundate checkusers with thousands of cases. The suggestion (as I understand it) removes burden from SPI filers by adding a disproportional burden on checkusers, who are already an overworked group. If you're suggesting an automated solution, then I believe IP blocks/IP range blocks and autoblock (discussed by Tamzin, above) already cover enough. It's quite hard to weigh up what you're really suggesting because it feels very vague without much detail - it sounds like you're just saying "a new SPI should be opened for every new user and IP, forever" which is not really a workable solution (for instance, ], which is about one every 18 seconds) ]] 18:12, 22 November 2024 (UTC)
::::And most of those accounts will make zero, one, or two edits, and then never be used again. Even if we liked this idea, doing it for every single account creation would be a waste of resources. ] (]) 23:43, 22 November 2024 (UTC)
:No, they should not. ] (]/]) 17:23, 22 November 2024 (UTC)
:This, very bluntly, ], and as noted by Tamzin this would result in frankly ''obscene'' amounts of collateral damage. You have absolutely no idea how frequently IP addresses get passed around (especially in the developing world or on ]), such that it could feasibly have three different, unrelated, people on it over the course of a day or so. —] ] <sup><small>] ]</small></sup> 18:59, 22 November 2024 (UTC)
:{{Question|label=Just out of curiosity}} If a certain ] is any indication, would a CU be able to stop that in its track? <span style="color:#7E790E;">2601AC47</span> (]|]) <span style="font-size:80%"><span style="color:grey;">Isn't a IP anon</span></span> 14:29, 23 November 2024 (UTC)
::CU's use their tools to identify socks when technical proof is necessary. The problem you're linking to is caused by one particular ] account who is extremely obvious and doesn't really require technical proof to identify - check users would just be able to provide evidence for something that is already easy to spot. There's an essay on the distinction over at ] ]] 14:45, 23 November 2024 (UTC)
::{{ping|2601AC47}} No, and that is because the user in question's MO is to abuse VPNs. Checkuser is worthless in this case because of that (but the IPs ''can and should'' be blocked for 1yr as ]). —] ] <sup><small>] ]</small></sup> 19:35, 26 November 2024 (UTC)
:::] is using a peer-to-peer VPN service which is similar to ]. Blocking peer-to-peer VPN service endpoint IP addresses carries a higher risk of collateral damage because those aren't assigned to the VPN provider but rather a third party ISP who is likely to dynamically reassign the blocked address to a completely innocent party. ] (]) 00:22, 27 November 2024 (UTC)
:I slightly oppose this idea. This is not ] where socks are immediately banned or shadowbanned outright. Reddit doesn't have ] as any wiki does. ] (]) 00:14, 25 November 2024 (UTC)
::How do you know this is how Reddit deals with ban and suspension evasion? They use advanced techniques such as device and IP fingerprinting to ban and suspend users in under an hour. ] (]) 23:47, 28 November 2024 (UTC)
:I can see where this is coming from, but we must realise that checkuser is not ] nor is it meant for ]. - ] (]) 04:49, 27 November 2024 (UTC)
::The question I ask myself is why must we realize that it is not meant for fishing? To catch fish, you need to fish. The no-fishing rule is not fit for purpose, nor is it a rule that other organizations that actively search for ban evasion use. Machines can do the fishing. They only need to show us the fish they caught. ] (]) 05:24, 27 November 2024 (UTC)
:::I think for the same reason we don't want governments to be reading our mail and emails. If we checkuser everybody, then nobody has any privacy. ] 20:20, 27 November 2024 (UTC)
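For scale on the carrier-range point made in the thread above: an IPv6 /32 allocation (the kind of range a large mobile carrier holds) spans 2^(128−32) addresses, roughly 7.9×10^28, a 29-digit number. A quick check:

```python
# Size of an IPv6 /32 allocation: 2**(128 - 32) addresses.
prefix_len = 32
addresses = 2 ** (128 - prefix_len)
print(addresses)            # about 7.9e28
print(len(str(addresses)))  # number of decimal digits
```

Matching "every new user" against activity anywhere inside ranges of this size is what makes the naive automated-CheckUser idea unworkable.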


I sympathize with Mztourist. The current system is less effective than it needs to be. Ban evading actors , they are dedicated hard-working folk in contentious topic areas. They can make up nearly 10% of new extendedconfirmed actors some years and the quicker an actor becomes EC the more likely they are to be blocked later for ban evasion. Their presence splits the community into two classes, the sanctionable and the unsanctionable with completely different payoff matrices. This has many consequences in contentious topic areas and significantly impacts the dynamics. The current rules are probably not good rules. Other systems have things like a 'commitment to authenticity' and actively search for ban evasion. It's tempting to burn it all down and start again, but with what? Having said that, the SPI folks do a great job. The average time from being granted extendedconfirmed to being blocked for ban evasion seems to be going down. ] (]) 18:28, 22 November 2024 (UTC)
:It's better than having a "See also" section loaded with people from the area. I tend to agree that the wider the area, the less useful a list is. At the province/state/similar wide level, it's not very useful. At the city or school level, it becomes more useful.
:My bigger concern is the reliability of the information. A lot of the lists I see are unsourced; the subject's article may repeat the claim that he's from the area/school, but there's no source in that article either. I agree that the lists need tended, but I don't think they should be removed on sight. —''']''' (]) 22:21, 9 January 2010 (UTC)
:Yeah, I don't see much of a problem with sourced "Notable residents" lists. <span style="border:1px solid #f57900;padding:1px;"><font style="color:#8f5902">]</font> ] </span> 04:16, 10 January 2010 (UTC)
:'''Oppose''' Many cities/geographic areas attach to famous past residents (you can look at how many states in the Midwest claim Lincoln as "theirs" as an example) as a way of demonstrating their own importance. I'm not sure how these sections are "abused," but from my experience, they are generally well maintained by those interested in the given place. You may not believe that famous residents make a place "better", but many of those places do. And they trumpet this information in news articles, on city websites, fatestivals, etc. All information should be sourced; but there is no need to throw the baby out with the bathwater. In any case, they serve an encyclopedic purpose and should stay, in my opinion. ] (]) 04:55, 13 January 2010 (UTC)
:'''Oppose''' as well. This was discussed (and rejected) last year as well: ]. Whether it should be irrelevant or not is not the point: neither the locations nor the people feel it is irrelevant, with people dwelling on their birthplace and so on in their autobiographies, and cities putting much emphasis on their famous inhabitants (with streetnames, statues, ...). Of course, such sections should not be abused by individual editors, and should be neutral (i.e. not only the ''positive'' inhabitants should be included, but the ''negative'' as well), but such sections should not be removed in general. Individual articles can always decide differently on the talk page of course. ] (]) 10:09, 15 January 2010 (UTC)
:'''Oppose'''. What if someone is looking for a specific person but only knows their first name and nationality? Also, people may come to the article specifically looking for famous people from the region. ] (]) 18:48, 15 January 2010 (UTC)
I am not quite sure what you are proposing here. Were you suggesting that if a person comes from area x, we should delete the category - for example, a category entitled "Category: People from Somerset"? Personally, I rather like that. It is a good way to find out who comes from different parts of one's country, and may help one to locate which famous people are associated with parts of one's own home country that one knows. ] (]) 21:07, 18 January 2010 (UTC)
: I have problems with people being listed as being from an area/school when the person has no article and is just listed with minimal identifying information. I'd like to see similar criteria as for the births and deaths date pages — the person has to have an article with the place substantiated in that article. ]<sub>(])</sub> 12:47, 25 January 2010 (UTC)


:I confess that I am doubtful about that 10% claim. ] (]) 23:43, 22 November 2024 (UTC)
== Removing warnings from one's talk page ==
::{{u|WhatamIdoing}}, me too. I'm doubtful about everything I say because I've noticed that the chance it is slightly to hugely wrong is quite high. The EC numbers are work in progress, but I got distracted. The description "nearly 10% of new extendedconfirmed actors" is a bit misleading, because 'new' doesn't really mean new actors. It means actors that acquired EC for a given year, so newly acquired privileges. They might have registered in previous years. Also, I don't have 100% confidence in the way I count EC grants because there are some edge cases, and I'm ignoring sysops. But anyway, the statement was based on . And the statement about a potential relationship between speed of EC acquisition and probability of being blocked is based on . And of course, currently undetected socks are not included, and there will be many. ] (]) 03:39, 23 November 2024 (UTC)

:::I'm not interested in clicking through to a Google file. Here's my back-of-the-envelope calculation: We have something like 120K accounts that would qualify for EXTCONF. Most of these are no longer active, and many stopped editing so long ago that they don't actually have the user right.
(This general category is listed as a perennial, I must disclose first. But activity on this front seems to have stopped two years ago. See ], and its talk page.)
:::Misplaced Pages is almost 24 years old. That makes convenient math: On average, since inception, 5K editors have achieved EXTCONF levels each year.

:::If the 10% estimate is true, then 500 accounts per year – about 10 per week – are being created by banned editors and going undetected long enough for the accounts to make 500 edits and to work in CTOP areas. Do we even have enough ] editors to make it plausible to expect banned editors to bring 500 accounts a year up to EXTCONF levels (plus however many accounts get started but are detected before then)? ] (]) 03:53, 23 November 2024 (UTC)
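The back-of-the-envelope estimate above can be restated as a quick sketch. All inputs are the figures quoted in the comment (120K qualifying accounts, a roughly 24-year-old project, the disputed "nearly 10%" share), not verified data:

```python
# Rough restatement of the back-of-the-envelope estimate above.
# All inputs are the commenter's assumptions, not verified data.
qualifying_accounts = 120_000   # accounts with enough edits/tenure for EXTCONF
site_age_years = 24             # approximate age of the project
ban_evasion_share = 0.10        # the disputed "nearly 10%" estimate

new_extconf_per_year = qualifying_accounts / site_age_years   # 5000 per year
evading_per_year = new_extconf_per_year * ban_evasion_share   # 500 per year
evading_per_week = evading_per_year / 52                      # roughly 10 per week

print(new_extconf_per_year, evading_per_year, round(evading_per_week, 1))
# → 5000.0 500.0 9.6
```

Note this averages over the whole history of the project; annual EXTCONF grants are not actually uniform across years, so the per-week figure is only an order-of-magnitude illustration.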
My proposal is that users' right to remove warnings from their talk pages be limited to warnings older than a set age, such as 1, 2, or 3 months. That way there should be no concern that warnings would stay permanently on user talk pages.
::::Suit yourself. I'm not interested in what interests other people or in back-of-the-envelope calculations. I'm interested in understanding the state of a system over time using evidence-based approaches, by extracting data from the system itself. Let the data speak for itself. It has a lot to tell us. Then it is possible to test hypotheses and make evidence-based decisions. ] (]) 04:13, 23 November 2024 (UTC)

::::@], there's a sockmaster in the IPA CTOP who has made more than 100 socks. 500 new XC socks every year doesn't seem that much of a stretch in comparison. -- ] (]) 19:12, 23 November 2024 (UTC)
Finding warnings helpful to Misplaced Pages and its users while simultaneously allowing warned users to remove those warnings at will is self-defeating. It doesn't amount to a 'no warning' policy, but it burdens conscientious warners too much: it requires them to scour users' talk page histories to be sure which warning levels to use, and even then that exercise reveals little about each previous warning. Did the warned users even object to the warnings they removed, or did they remove them simply to cover their tracks?
:::::More than 100 XC socks? Or more than 100 detected socks, including socks with zero edits?

:::::Making a lot of accounts isn't super unusual, but it's a lot of work to get 100 accounts up to 500+ edits. Making 50,000 edits is a lot, even if it's your full-time job. ] (]) 01:59, 24 November 2024 (UTC)
Therefore, for the warnings that they shouldn't be allowed to remove, warned users should be encouraged to provide their retorts, if they have any, right there, below the warnings—which they're free to do now. Users who are about to issue warnings should in turn be instructed to read those rebuttals, if any, before issuing their warnings.
::::::Lots of users get it done in a couple of days, often through vandal fighting tools. It really is not that many when the edits are mostly mindless. ''']''' - 00:18, 26 November 2024 (UTC)

:::::::But that's kind of my point: "A couple of days", times 100 accounts, means 200–300 days per year. If you work five days per week and 52 weeks per year, that's 260 work days. This might be possible, but it's a full-time job.
Is this a good middle ground? ] (]) 22:27, 17 January 2010 (UTC)
:::::::Since the 30-day limit is something that can't be achieved through effort, I wonder if a sudden change to, say, 6 months would produce a five-month reprieve. ] (]) 02:23, 26 November 2024 (UTC)
*'''Strongest possible oppose''' due to the fact that this would require editors (even admins) to submit to dumb, troll-ish, ] etc. drivel staying on their talkpages. <font color="#A20846">╟─]]►]─╢</font> 22:32, 17 January 2010 (UTC)
::::::::Who says it’s only one at a time? Icewhiz for example has had 4 plus accounts active at a time. ''']''' - 02:25, 26 November 2024 (UTC)
*:But your objection would logically apply to any such messages that are non-templated, doesn't it? ] (]) 22:37, 17 January 2010 (UTC)
::::::::: about ban evasion timelines for some sockmasters in PIA that show how accounts are operated in parallel. Operating multiple accounts concurrently seems to be the norm. ] (]) 04:31, 26 November 2024 (UTC)
*::What does that even mean? All I'm saying is: if ] comes along to my talkpage during a content-dispute and, solely for ], posts {{tl|uw-npov1}}, I'd have to keep it there for months (and what about archiving, anyway?). Ridiculous notion. <font color="#C4112F">╟─]]►]─╢</font> 22:39, 17 January 2010 (UTC)
::::::::::Imagine that it takes an average of one minute to make a (convincing) edit. That means that 500 edits = 8.33 hours, i.e., more than one full work day.
:::I was going to re-write my reply, but an edit conflict prevented me.
::::::::::Imagine, too, that having reached this point, you actually need to spend some time using your newly EXTCONF account. This, too, takes time.
:::Your tone is very hostile. Would you dial it down, please?
::::::::::If you operate several accounts at once, that means:
:::I'll think about your objection before addressing it again. ] (]) 22:45, 17 January 2010 (UTC)
::::::::::You spend an hour editing from Account1. You spend the next hour editing from Account2. You spend another hour editing from Account3. You spend your fourth hour editing from Account4. Then you take a break for lunch, and come back to edit from Accounts 5 through 8.
:::As I wrote, if you contest a warning, you can write so. Bad faith warnings would be just as removable ''and punishable'' as they currently are, just like any other message that anyone can write on a talk page, including those that are not templated. But warned users would be required to make the case why any warning that falls within the protected period is bad faith or misaddressed or whatever, and thus removable. I think of the editor who committed clear and obvious vandalism, but who currently has the same right to remove a warning as the good faith editor who's falsely accused of something. I don't think they deserve equal benefit of the doubt. ] (]) 22:59, 17 January 2010 (UTC)
::::::::::At the end of the day, you have brought 8 accounts up to 60 edits (12% of the minimum goal). And maybe one of them got blocked, too, which is lost effort. At this rate, it would take you an entire year of full-time work to get 100 EXTCONF accounts, even though you are operating multiple accounts concurrently. Doing 50 edits per day in 10 accounts is not faster than doing 500 edits in 1 account. It's the same amount of work. ] (]) 05:13, 29 November 2024 (UTC)
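The time-cost arithmetic above can be sketched as follows. The per-edit time, working day, account count, and edit threshold are all the comment's assumptions, not measurements:

```python
# Sketch of the time-cost arithmetic in the comment above.
# Assumptions (all from the comment): one convincing edit per minute,
# an 8-hour editing day, 8 accounts operated in parallel, 500-edit goal.
minutes_per_edit = 1
edits_needed = 500                  # EXTCONF edit threshold per account
hours_per_account = edits_needed * minutes_per_edit / 60      # ~8.3 hours each

hours_per_day = 8
accounts_in_parallel = 8
edits_per_account_per_day = (hours_per_day * 60) // (accounts_in_parallel * minutes_per_edit)
daily_progress = edits_per_account_per_day / edits_needed     # fraction of goal per day

print(round(hours_per_account, 1), edits_per_account_per_day, daily_progress)
# → 8.3 60 0.12
```

The point the sketch illustrates: running accounts in parallel doesn't reduce total effort, since 60 edits a day across 8 accounts consumes the same editor-minutes as 480 edits a day in one account.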
:::::No, I will not "dial down" my tone, which is not hostile, it merely reflects my feelings towards your absurd and unworkable proposal. <font color="#C4112F">╟─]]►]─╢</font> 08:14, 18 January 2010 (UTC)
:::::::::::Sure it’s an effort, though it doesn’t take a minute an edit. But I’m not sure why I need to imagine something that has happened multiple times already. Icewhiz most recently had like 4-5 EC accounts active, and there are probably several more. Yes, there is an effort there. But also yes, it keeps happening. ''']''' - 15:00, 29 November 2024 (UTC)
*'''Oppose'''. I lived through the era when we required people to retain warning messages. It led to some of the dumbest edit wars I can ever remember (e.g. edit wars about removing warnings about removing warnings). Requiring people to retain warnings they disagree with inflames too many tempers to offset the small gain of making it easier to keep track of vandals. No thanks, let's not try that again. ] (]) 23:06, 17 January 2010 (UTC)
::::::::::::My point is that "4-5 EC accounts" is not "100". ] (]) 19:31, 30 November 2024 (UTC)

:::::::::::::It’s 4-5 at a time for a single sock master. Check the Icewhiz SPI for how many that adds up to over time. ''']''' - 20:16, 30 November 2024 (UTC)
:I'm ''very'' surprised to learn that a 'no remove' rule existed before. Your answer is helpful. ] (]) 23:24, 17 January 2010 (UTC)
::::::::Many of our frequent fliers are already adept at warehousing accounts for months or even years, so a bump in the time period probably won't make much of a difference. Additionally, and without going into detail publicly, there are several methods whereby semi- or even fully-automated editing can be used to get to 500 edits with a minimum of effort, or at least well within script-kid territory. Because so many of those are obvious on inspection some will assume that all of them are, but there are a number of rather subtle cases that have come up over the years and it would be foolish to assume that it isn't ongoing. ] (]) 17:31, 28 November 2024 (UTC)

Also, if we divide the space into contentious vs not-contentious, maybe a one size fits all CU policy doesn't make sense. ] (]) 18:55, 22 November 2024 (UTC)
:Yes, that's interesting, Dragon's flight. I'll also mention, SamEV, that at least for anon IP editors (with whom I largely deal), hiding warning messages doesn't work very well, because it's easy for an established editor to guess they're doing it, and simple to check whether they have. Also, when evaluating a new anon IP, this activity is an early hint that they aren't willing to play by the rules. Regards, ] (]) 19:07, 18 January 2010 (UTC)
::But it would be far more useful if you could look at those (unexpired) warnings on the talk pages, along with whatever objections were expressed by the warned users. ] (]) 09:52, 23 January 2010 (UTC)
Terrible idea. Let's AGF that most new users are here to improve Misplaced Pages instead of damage it. ] (]) 18:33, 22 November 2024 (UTC)
*'''no, no...''' The purpose of a warning is to warn a user of something. If the user removes the warning, that only means that they are sufficiently warned (whatever that means to them). ''Forcing'' them to keep the warning on their page is punitive rather than productive - might as well just create a set of Scarlet Letter templates so we can brand them as undesirables for all the world to see. Not a good idea. --] 23:08, 17 January 2010 (UTC)
:Ban evading actors who employ deception via sockpuppetry in the ] topic area are here to improve Misplaced Pages, from their perspective, rather than damage it. There is no need to use faith. There are statistics. There is a probability that a 'new user' is employing ban evasion. ] (]) 18:46, 22 November 2024 (UTC)

::My initial comment wasn't a direct response to yours, but ] and IPs won't be able to edit in the WP:PIA topic area anyway since they need to be extended confirmed. ] (]) 20:08, 22 November 2024 (UTC)
:Thank you, Ludwigs2. But the point is to make things easier for the editors who do the right thing by issuing warnings (and I'm not claiming that issuing warnings is mandatory). And why should we be so preoccupied with not 'punishing' misbehaving users a little? We shouldn't be blasé about it, but it should not be fatal to measures aiming at improving how Misplaced Pages works. ] (]) 23:24, 17 January 2010 (UTC)
:::Let's not hold up the way PIA handles new users and IPs, in which they are allowed to post to talk pages but then have their talk page post removed if it doesn't fall within very specific parameters, as some sort of model. ] (]) 02:51, 23 November 2024 (UTC)

::Yes, Sam, and the road to Hell ''really is'' paved with good intentions. We punish people where they do harm to the encyclopedia (and usually that 'punishment' merely consists of preventing them from doing further harm, temporarily or permanently). Removing material from a user talk page does not in any way harm the encyclopedia, therefore it's not a punishable offense. ]. --] 05:12, 18 January 2010 (UTC)
:::The removal of warnings, especially when done by vandals, does harm the encyclopedia, because as a result too many vandals get weaker warnings than they otherwise would, especially from warners who are not very experienced; which means that too many vandals are free to roam around Misplaced Pages longer than they should. ] (]) 09:52, 23 January 2010 (UTC)
::::If a user repeatedly removes warnings in order to avoid receiving higher level warnings, it's unlikely that this would go unnoticed for very long. I check the talk page of users I've recently warned to see if they've gotten any further warnings, and I'm probably not the only one. If they've been repeatedly removing warnings in the hopes that no one would notice that they haven't stopped their disruptive behavior, someone watching their talk page ''would'' notice. ] 16:00, 23 January 2010 (UTC)
:::::That's the truth (mild pun intended). But that's no help to any warner who's new to the talk page. ] (]) 14:33, 24 January 2010 (UTC)
*This isn't even worth thinking about, to me. Sorry.<br/>— ] (]) 23:42, 17 January 2010 (UTC)

:OK. Anyone can comment, just as anyone can put forth proposals (I think). ] (]) 23:52, 17 January 2010 (UTC)
::What was the purpose of that message, and the tone behind it, SamEV? <font color="#7026DF">╟─]]►]─╢</font> 08:16, 18 January 2010 (UTC)
*'''Request for information''': Is there presently any convenient way to check for a user's warning history other than his or her talk page and its history? If so, the proposal seems almost moot; if not, I can see why such an external data source would be desirable and might be preferable to immutable warnings on the talk page -- assuming a high level of transparency, right of appeal, etc. etc. - Regards, ] (]) 04:04, 18 January 2010 (UTC)
**Under the current understanding, standard warnings are supposed to expire and be forgotten after roughly a month. Editors who have talk pages which are active enough to make checking back a month difficult are unlikely to be getting standard template warnings. Persistent vandals (who stay under the 4 warnings per month limit) are a minor annoyance who will eventually get bored if they don't get noticed and blocked. Petty vandalism from months or years ago shouldn't count against an editor who is (maybe) trying to edit productively now. I don't even see a reason for an external resource. --] 05:22, 18 January 2010 (UTC)
***If the persistent vandal keeps deleting warnings, then doesn't that put a burden on the person issuing the warning to reconstruct history to see if they've been issued 4 warnings per month? Isn't that the argument made in the paragraph beginning "Simultaneously finding warning users as helpful..." in the proposal above, or am I misunderstanding it? - ] (]) 05:46, 18 January 2010 (UTC)
****The thing is, "warnings" themselves are effectively meaningless in reality. They're designed more as a courtesy/civility tool in order to prevent people from honestly being taken by surprise when it comes to administrative action. In the case of purposeful vandalism, they normally do more harm than good in that they provide the vandal with the attention and feedback that they crave by vandalizing, but in the end I think that we've (correctly) chosen to live with that drawback in order to prevent "damage" to the (optimistically) 1-2 out of 1000 editors who are mistakenly labeled as vandals due to some mistake/misunderstanding (normally, in my experience, caused by language issues). It's generally better to err on the side of caution with things like this, after all. Personally, I'd think that some sort of proposal to "police the policeman", a process to review the use of warning templates and "vandal fighting", would receive more support and possibly even be worth doing, but then I'm somewhat predisposed to think that way....<br/>— ] (]) 06:36, 18 January 2010 (UTC)
****When I patrol, my usual routine is to revert, then go to the user's talk page to leave a warning. When I get there, if the 'discussion' tab is redlinked I leave a level 1 warning and move on. If the 'discussion' tab is blue (meaning that the page was created but is currently blank) I click the history tab and glance at recent activity, leaving a warning at a level that seems appropriate or sending the user straight to the admins if there's a lot of recently deleted templates. I leave any worrying about correctness to admins (vandal patrollers are traffic cops; admins are the judges). It really doesn't take much time or thought. --] 07:09, 18 January 2010 (UTC)
:I'm not sure what benefit there would be in this change, and I can see several new areas of contention that it would open up:
#It would muck up a lot of people's archiving
#Incorrect warnings would become a lot more contentious. For example I frequently move new articles to correct their capitalisation and as a result I sometimes get the "warning" when the article is tagged for speedy deletion.
#Sometimes the boundary between warning someone and informing them that you don't think their article meets our notability criteria can get a tad grey. {{tl|G3}} and {{tl|G10}} will almost always result in warnings, but several of the other speedy criteria currently cover a range of good and bad faith articles, if we start differentiating between notifications that one can remove and warnings that we can't then New Page Patrol suddenly gets even more overcomplex.
#We have a philosophy that anyone can start editing here without learning our ways, if we want to make warnings "sticky" then that is another thing that we have to communicate to newbies.
#Some IPs are dynamic, others may be shared. The person who deletes a warning from a fortnight ago may be doing so because they have taken over that IP, and if so they might baulk at being told to reinstate a warning that they consider was given to someone else.
#For the last four reasons I predict that were we to do this, the result would be a troll feeding frenzy.
:PS For what it's worth, when I block editors I don't just look at their current talkpage and I suspect most if not all admins have a similar MO. '']]<span style="color:DarkOrange">Chequers''</span> 18:05, 19 January 2010 (UTC)
**PhilipR, you understood well: I propose that we make warnings more effective and cease burdening warners so much. ] (]) 09:52, 23 January 2010 (UTC)
**ϢereSpielChequers, most of those seem like good arguments to me, for now at least. Not the first one, though. That potential problem can be avoided (i.e. other than by not adopting this proposal — and it ''might'' indeed be rejected...) by doing as ] recommends: "Warnings should be grouped by date under the heading "Warnings"." So my proposal would pose no trouble for archiving, either by humans or bots, as warnings would be found in one section, with the older warnings at the top of it. ] (]) 09:52, 23 January 2010 (UTC)
*:::::Firstly, I haven't ever heard of that rule, and am a vastly experienced editor (] doesn't follow it either). Secondly, it's not a rule. It's not a policy. It's not even a guideline. I don't know where it came from, but it has no standing whatsoever! <font color="#C4112F">╟─]]►]─╢</font> 16:24, 28 January 2010 (UTC)
*::::::Heya TT, I'm not sure what the history is here with you, or between you and SamEV, or whatever. I just though that I should mention that your own tone in this discussion has been fairly strident right from the get-go. It'd be nice if you could back off a bit here, as I don't see how continuing with this open animosity here in front of everyone is really helpful. You could always take it to his talk page, if you think that it's really important.<br/>— ] (]) 16:45, 28 January 2010 (UTC)
*:::::::Thanks, Ohms law.
*:::::::I can't recall having ever interacted with user Treasury Tag, or to have even seen his name anywhere. ] (]) 21:04, 29 January 2010 (UTC)
***That assumes warnings are issued in accordance with that guideline and I doubt they are - most are simply added to the end of a talkpage. But some active users have to archive on quite a frequent basis simply to keep their talkpages editable. Not all of them would be able to increase their archiving interval to the number of months that you want these warnings to stick for, and a bot that had different archiving intervals for warnings and other threads would be overly complex and risk hiding warnings away from other relevant contemporary threads. '']]<span style="color:DarkOrange">Chequers''</span> 16:21, 28 January 2010 (UTC)
****Warners should be made more aware of that guideline, despite the fate of this proposal. When I used to warn more, I did use that "Warnings" header. And btw, I don't see why bots that currently do cleanup or other tasks couldn't be programmed to create that heading and gather the warnings under it. For example, SineBot could be programmed to gather them when it leaves a message on a user page. (I name SineBot just as a blind example.)<br>
:::I don't understand what you mean by "hiding warnings away", though. I take your word for it re: complexity, since I'm not versed in programming. :(
:::] (]) 21:04, 29 January 2010 (UTC)
*'''Oppose''' most of the reasons are listed above. As to the concern about looking for removed warnings, this is one reason that warnings should include an edit summary. If the edit summary includes "Level 3", "Level 4" or "Final warning" it's easy to get an idea of what's gone on before with a quick scan of the talk history. {{unsigned|Cube lurker|18:14, 19 January 2010}}
**Many, maybe most, warners don't leave those edit summaries. Besides, what do you learn just by looking at the edit summaries? What if the warnings were undeserved? You wouldn't know it just from the edit summaries. You'd have to look at diffs, one by one, to see what was said about each warning, if anything. Per my proposal, you'd get a much better idea of which warnings were merited and which not, ''and'' you'd know it faster, because warned users would explain themselves on their talk pages and those comments would remain visible as long as the warnings themselves. The reason warned users would usually explain themselves is that simply removing the warnings would no longer be an option.
**Maybe one day a technical way can be found to add warnings which cannot be removed by users, but by admins and/or automatically, once they've expired, by the software/bots. ] (]) 09:52, 23 January 2010 (UTC)
**:If the warnings were undeserved, then they most certainly should not be forced to stay on the talk page. A lot of people use automated tools like Twinkle for warning users, which indicate the warning level in the edit summary. Looking at the page history gives a pretty good indication of what sort of warnings the user has received in the past. ] 16:00, 23 January 2010 (UTC)
*'''Oppose''' There are some wizards who are not good, Harry ... oops. Some folks have been known to give out totally unwarranted warnings. Frinstance, folks who give out 3RR warnings and final warnings after a single edit on a page. Absent a real need to alter current policy, let's keep what ain't broke. ] (]) 20:39, 23 January 2010 (UTC)
**Reach Out to the Truth, and Collect: I concede on the autosummary point. But with respect to undeserved warnings: As I said above, bad faith warnings would continue to be punishable, and admins could remove them. In any case, I think it would be preferable for an undeserved warning to stay, with the warnee's objections, instead of a deserved warning's being removed with no justification even attempted. ] (]) 14:33, 24 January 2010 (UTC)
*:::"I think it would be preferable for an undeserved warning to stay" – *groan* <font color="#7026DF">╟─]]►]─╢</font> 14:37, 24 January 2010 (UTC)

*''Hand up''* May I ask a question? When the proposer was a hall monitor in junior high a couple years ago, did he ask the principal to make those who had been admonished for not having a pass wear a piece of paper recording how many times the person had been previously admonished so the hall monitor could calibrate his degree of sternness when he caught them again? ] (]) 14:57, 24 January 2010 (UTC)
**All the time. ] (]) 16:05, 24 January 2010 (UTC)
***Ha. I am disarmed by your charm (seriously), perhaps enough to support the keeping of block notices by administrators, but have been here enough years to have run into people whose "warnings" on my talk page didn't merit reading, let alone prolonged residence. ] (]) 01:25, 25 January 2010 (UTC)
****I'll take it! That's the closest to a "Support" vote I've got so far. :) ] (]) 20:16, 25 January 2010 (UTC)
*'''Weak oppose'''. I'm as tired as the next guy of IPs getting away with murder by blanking their userpages month after month, but the cons of this proposal (trolls having one more policy to point to, people putting up with garbage, and the historical record pointed to above) simply outweigh the possible good. If no one is watching these IPs blank their pages now and keeping track of them, no one will notice if this goes through. --] 20:41, 25 January 2010 (UTC)

*'''Oppose''' We've got better things to piss our time away on (such as an encyclopedia) than in trying to gauge whether a warning template can or cannot be removed. We don't need another level of bureaucracy. ] <span style="color: #999;">// ] // ] //</span> 20:50, 25 January 2010 (UTC)

== Notability for genes ==

Consider our article ]. A ], barring a research breakthrough. Written entirely by bots, copying info from other gene databases that are freely available online. Cites only primary sources and databases that appear to be automatically generated from them and each other. Doesn't provide any information useful to laypeople, nor to anyone else who isn't likely to use those other databases. Doesn't establish the gene's importance.

We've got thousands of these articles (have a look at ]), and can expect thousands more as the labs continue to churn out results. Until I saw ], I thought the term "sciencecruft" was just empty rhetoric.

To limit the amount of space dedicated to genes such as these on Misplaced Pages, and the total number of permastubs, I suggest that we develop some guidelines to determine which genes are notable, something along the lines of ], and a bot to either merge the rest into table articles (or another similarly compact form) or transwiki them elsewhere. ]] 03:41, 18 January 2010 (UTC)
:I can see a valid case for stating that all genes have ''de facto'' notability which merits their inclusion in Misplaced Pages, so I would disagree with the description of articles like ] as "sciencecruft". However, that inclusion doesn't necessarily mean that each gene simply must have its own isolated article. If merging several smaller permastubs into a central main article will provide more context to the reader then I think that is the most ideal situation. Notable topics can still be covered in Misplaced Pages as part of broader focused articles. I sincerely doubt that the "ego" of the CHUK gene would be bruised if this stub were merged into a larger article about related genes. :P ]]/] 03:59, 18 January 2010 (UTC)
:: A quick search of ] yields the following review articles for CHUK (also known as IKK-α): {{PMID|17047224}}, {{PMID|18818691}}, and {{PMID|19085841}}. Just reading the abstracts it is clear that CHUK/IKK-α, in addition to being a subunit of ], has functions quite independent of the other two subunits (IKK-β and -γ) that comprise IκB kinase. Hence the research breakthrough that was asked for above has already occurred and is well documented in the literature. I am presently working to expand the article to make it more accessible to a general audience and also to firmly establish its notability. Finally I wanted to point out that this article was created as part of the ] project, whose mission statement is to "''create seed articles for every ] human gene''". The criterion for ] to create an article was a certain minimum number of citations in PubMed. It is not guaranteed that each and every one of these articles is notable, but the citation criterion ensures that a large majority are. Cheers. ] (]) 19:57, 18 January 2010 (UTC)
:::I'd support one article per gene unless there's a ''really'' good reason to group them. The issue is that the genes we know less about are harder to group, and more likely to be grouped wrongly. Having at least a stub article per gene means that as more is known the information will be added to systematically named and formatted articles. ]<sub>(])</sub> 15:14, 19 January 2010 (UTC)
::::Since a gene article is only created if a gene has several reliable sources that discuss it as their main subject, this fits well with the existing notability guideline. We even have secondary sources, which are the database entries (eg the entry for ), which discuss and summarise the findings of the primary sources. Merging is very difficult, for example CHUK has two independent functions, so which function would you merge into? ] (]) 21:02, 20 January 2010 (UTC)
'''Strenuous oppose:''' Deleting or forbidding these would actually ''not'' make more room for TV episodes, albums, video game characters, and Pokemon cards, so what's the point? ] (]) 14:50, 24 January 2010 (UTC)

== Policy proposal to restrict editing powers of administrators on articles where they have undertaken administrative action (either on the article or other editors) ==

'''The proposal''': An admin who has undertaken admin procedures on a particular Wikipedia article page (say, protecting the page, blocking a user, warning a user) should not be allowed to edit the same article page for a pre-defined period (for example, 3 days/a week/ten days; that is, a pre-defined period decided by consensus here). The only editing allowed to such an administrator on the said article would be to revert clear vandalism (and to get involved in talk page discussions). The administrators can continue to engage in other administrative action on the specific article and on the users involved. If the administrator really wishes to edit normally on the article, he/she should either wait for 3 days/a week (or any other pre-defined period decided here) or should instead suggest the change to other administrators/editors who could undertake the action with a more neutral POV.

The '''whys''' of the current proposal:
*Reason 1 - An admin who might have, for example, locked a page for some time, blocked an editor or warned an editor about an impending block (due to the editor's tendentious editing/vandalism) has a possibility of becoming seemingly 'attached' to the article; in other words, apparently taking ownership of the article, leading to forceful (or probably non-NPOV) editing that might not take clear account of the talk page discussions that would have taken place within the article over a length of time.
*Reason 2 - It'll create more transparency into the administrative actions, in general.
*Reason 3 - Other editors, after seeing an admin's administrative action on some particular user/or on the page, and noting that the admin is continuing editing on the page, might not be 'bold' in their editing actions and might accept changes without much discussion.

The '''benefits''' of this proposal:
*It clarifies one policy grey area of Administrative action.
*Administrators would feel less worried about getting caught in a misjudgement of action.
*It would allow administrators to justify their edits to other editors (who might be given to questioning the same).
*Clear cases of vandalism can still be reverted/edited by the administrator, so it does not shackle administrators' powers at all.

The '''drawbacks''' of this proposal:
*Admins might be forced to stop editing on pages that they might have been involved in over a long term; and that can take the sheen off the tempo.
*It'll give long term disruptive editors much more leeway to engage in tendentious editing during the time the involved administrator is absent.

Past '''similar proposals''' which have failed consensus:
*]: This past proposal was completely focused on disallowing administrative action on users, with whom the administrators had been engaged in content disputes. The current new proposal does not talk about restricting administrative powers at all; but only focuses on editing powers of administrators.

] ] 04:10, 19 January 2010 (UTC)

Discussions could be held from here:

:So if I revert and block some kid who scribbles "u poopface" in an article, I'd be prohibited from editing it? Seriously? Often it's when we revert vandalism on an article that we get caught up reading it, and then edit and improve. That's one of the fun things about Misplaced Pages. <small>(Hey, that's the first time I ever got to write "u poopface"!)</small>
:Also, I often block vandals spray-painting graffiti on articles on my watchlist -- out-of-the-way topics for which typically I am the main, or only author. This new rule would prevent people like me from making further improvements to these articles. It would be a waste of time to try to find another administrator to make blocks or protects for me (in fact, if this were implemented, I'd just ], and continue cleaning up the messes and improving the articles, and no one would ever notice). Maybe you are thinking of high-traffic articles on hundreds of watchlists? ] ] 04:27, 19 January 2010 (UTC)
:We work it the other way around with ], which permits an admin to edit an article, but not take admin actions on it of a controversial nature. ''']''' <sup>]</sup> 04:28, 19 January 2010 (UTC)
:There is no need whatsoever for such a policy. As noted by others, ], while the community already reacts strongly to administrators who use the tools despite being involved. Frankly, I find this to be a solution in search of a problem. ]] 04:34, 19 January 2010 (UTC)
::I was thinking something along the same lines: Is this a problem? I understand how it's ''possible'' for someone to come in as an uninvolved admin, issue a few blocks, and then become involved. But does it happen? And if so, does it happen in such a way that the person who issued the blocks (or whatever the action was) retains that "administrative" air, as a now-involved editor? ] (]) 04:39, 19 January 2010 (UTC)

Warning a user is not an administrative action. Beyond that, administrative involvement is about content disputes; not all editing is a content dispute. An administrator must never use their tools to gain an advantage in a content dispute; that is where the line is drawn. Uncontroversial actions like blocking vandals are not an issue. ]<small> <sup>(Need help? ])</sup></small> 04:36, 19 January 2010 (UTC)

This proposal ] and would just be another way to "get an admin in trouble." Now we would have to remember how long our "timer" is on every article we've performed an admin action on or... what happens then? The admin gets blocked for making a constructive edit because they didn't wait long enough to make it? No thank you. ] (]) 04:44, 19 January 2010 (UTC)
*It does seem as though we really need to do something about cleaning up the behavioral guidelines on content disputes. I don't at all think that this is limited to administrator behavior, but the simple fact is that our content dispute policies/guidelines and dispute resolution processes are simply piss poor at present.<br/>— ] (]) 04:44, 19 January 2010 (UTC)

*While I am certain the proposal was ''made'' in good faith, I do think that such a policy is actually contrary to the spirit of ] in that it presumes admins to be incapable of exercising appropriate judgment in the absence of explicit rules. Furthermore, it would most certainly have a negative impact on the project because it would prevent admins - who are after all merely editors with extra responsibilities - from making many necessary and beneficial edits. (As with Antandrus, I end up watchlisting, following, and subsequently editing many of the articles where I've had to act in an administrative capacity.) --''']'''''<small><sup>]</sup><sub>]</sub></small>'' 04:49, 19 January 2010 (UTC)

*This policy is a good idea and necessary. It will prevent ill-feelings, and avoid the unnecessary appearance of impropriety. Once, while participating in a heated dispute about a page, I saw the page locked down by an administrator, who then continued to make edits to the page. The edits were minor and mostly uncontroversial, but they created bad feeling in the other group and a sense that they were preemptively sidelined. This was totally unnecessary. It was as if the administrator had said, I'm going to lock you out, and keep on working on my version, and you can't do anything about it. My version is the one that's right; it's decided. The fact is, minor edits can wait until the dispute is over; admins should not have special exemption, especially if they were the ones that locked the page. In answer to Antandrus' argument about vandalism, first, this policy isn't meant to apply to non-controversial administrative actions like reverting obvious vandalism; also, if you revert a page and protect it, and/or block a user for vandalism, do you really ''need'' to edit that same page? I think doing so when the edit is not obviously vandalism will lead to unnecessary ill-feelings. ] (]) 05:55, 19 January 2010 (UTC)
:Lawrence, unless I'm mistaken, this proposal doesn't address the problem you've described. There are already strict measures in place to prevent admins from locking a page and then using their access to continue editing. This proposal would prevent an admin from editing even when ''all other editors'' were able to do so. I'm not sure if that affects your position regarding the proposal. --''']'''''<small><sup>]</sup><sub>]</sub></small>'' 06:24, 19 January 2010 (UTC)

This is a good proposal since it is aimed specifically at abuse by administrators like ]. Please see the details at . Most of the edits later on become vindictive, and the administrator gets attached to the whole discussion and loses objectivity.
] (]) 11:02, 20 January 2010 (UTC)

*<small>someone apparently reverted my edit to the absolutely ridiculous section title, which is fine, I'm certainly not going to try changing it again. It would be helpful if the author of this section or whoever it was who reverted back to the paragraph as a section title would come back and change it to something more reasonable. Thanks.</small><br/>— ] (]) 06:47, 19 January 2010 (UTC)

* I would oppose this policy. The way it is worded would mean (as others have pointed out, and indeed as I pointed out to the submitter on their talk page a month ago) that admins couldn't edit articles on which they have done anti-vandal work (e.g. semi-protection). And if it were re-worded to exclude such actions, then it wouldn't be necessary anyway. If an admin fully protects a page and continues to edit it, then they would be in breach of the page protection policy. Either way, this policy is not needed, and would hamper admins in their main work: being editors. -- ''''']'''''/]&#124;]\ 08:41, 19 January 2010 (UTC)
*I agree with several other commenters that this is an unnecessary and counterproductive prohibition. The problematic case is not for an admin to take some administrative action related to a page and then start editing it (excepting page protection, which is already covered). If an admin has closed an AFD, blocked a vandal, etc., and notices in the process of doing so that the article could use some sprucing up, that's a ''good'' thing. The problematic case is when admins who are already active editors on a page use their admin tools to further an editorial agenda, which is already addressed in ]. --] (]) 17:20, 19 January 2010 (UTC)
*Hate to pile on, but I have to agree with the general sentiment here. I thought admins were already trying to avoid using the tools in relation to ''vested'' editing, or content disputes. As long as they're blocking vandals, and not people who disagree with them over editorial matters, I don't mind any admins editing articles they've policed. --] 17:45, 19 January 2010 (UTC)
*Oppose - unnecessary creep. And for my own part, I rarely edit articles unless I land on them for some other reason. So I might show up to fulfill a semi-prot request for an article, read through it, and notice some slight copy-editing to be done. So I'd be prevented from doing that? No thanks... This policy will either be too broad and prevent good faith edits, or so specific as to be collapse under its own weight. –<font face="verdana" color="black">]</font>] 17:49, 19 January 2010 (UTC)

Comment: it seems clear that this proposal isn't going to fly. If this section is to serve any further use, it might be for a wider discussion about the perceived problem. ] <sup>]</sup> 18:11, 19 January 2010 (UTC)
*Oppose, I don't see how preventing a bunch of good faith edits by admins is going to improve the pedia. Like others above I'm easily distracted from admin stuff by the opportunity to do a gnomish edit; if you don't want me fixing typos and dab links, can you explain why such edits are a problem? '']]<span style="color:DarkOrange">Chequers''</span> 18:18, 19 January 2010 (UTC)
*Although I am the one who proposed the suggested policy, I should say that I have to agree with many of the points that have been mentioned above by administrators/editors opposing the policy change. At this juncture, I therefore suggest that ] be changed appropriately in due course to reflect a policy that leaves less grey area in defining when an Administrator will be considered uninvolved with respect to editing on an article where he/she has undertaken an administrative action. ] ] 18:50, 19 January 2010 (UTC)

:While you are welcome to propose a change on that policy's talk page, I don't think it is a grey area at all. I think the criteria for involvement are well defined. ]<small> <sup>(Need help? ])</sup></small> 18:52, 19 January 2010 (UTC)

:You still seem to have it somewhat backward, as MBisanz mentioned above. Becoming "involved" with an article is a result of ''editing'' the article and prevents them from taking admin actions on it. <span style="font-family:Broadway">]]</span> 19:47, 19 January 2010 (UTC)
:Thing is, it depends on what is going on. There are areas where the amount of time taken to resolve in an admin's mind what the content situation is behind a problematic behaviour then makes them an "instant" expert on some tiny area - usually meaning they can see both sides and propose a neutral wording. Sometimes the admin is mediating, sometimes they just see an infelicitous wording. Anyway, we could discuss changes to ] but we should avoid ]. '']&nbsp;]'', 20:42, 19 January 2010 (UTC).
:*'''Changes suggested''' > I've suggested the subsequent changes to ] ]. If you wish, kindly put in your viewpoints there. Thanks ] ] 04:50, 20 January 2010 (UTC)

*Bad idea, if I revert and block vandals, this shouldn't prevent me from improving the article. For example, just because I blocked , does this mean I should be limited in any way in editing the article on ]? ] (]) 21:11, 20 January 2010 (UTC)

*'''Oppose'''. Many administrators first encounter articles as a result of administrative action, and become involved in editing after the encounter. The proposer has gotten it backwards. It is more sensible for an admin to refrain from using administrator powers ''after'' being regularly involved in editing an article, except in certain egregious situations where admin involvement is required and obvious. Admins exercise such restraint already, so there is no need for the proposal. ~] <small>(])</small> 18:13, 26 January 2010 (UTC)

== Watchlists and maintenance ==

So, as I understand it, the biggest issue people have with reliability of Misplaced Pages is vandalism. A lot of vandalism is never caught because a lot of pages haven't been watchlisted. IIRC, the median time for vandalism to be caught is a few minutes, but the mean time is like two weeks. This is a result of vandalism on high-traffic pages being caught immediately by bot or watchlister, as opposed to low-traffic pages being ignored for months. It seems to me that this issue could be addressed by having some way of counting the number of (active) editors watching pages and creating an "orphanage" to identify pages with few or no editors watching them. I don't think this would greatly increase the amount of work required to maintain a watchlist, as the pages that would be getting watchlisted that aren't already are very infrequently edited, so the edits to check would be small in number. Is this a possibility? ] (] | ]) 06:28, 19 January 2010 (UTC)
:Someone already beat you to that idea: ] and . It just hasn't been implemented in the software yet. --] ] 07:32, 19 January 2010 (UTC)
::Aw. Here I was thinkin I was being helpful. ] (] | ]) 07:50, 19 January 2010 (UTC)
:::The list of unwatched pages (that is, pages that are not on any user's "watchlist") is kept secret for obvious reasons. There is ] that can be used to see how many people watchlist a page, but the toolserver admins restricted public visibility of how many watchers to 30 or greater. That is, if there's a dash in the "watchers" column, it means there are fewer than 30 users who have that on their watchlist. Only a small list of people outside of toolserver admins can see unrestricted data using this tool. Lately there's been much talk about using this data, but ]. ] (]) 08:41, 19 January 2010 (UTC)
::::95% of vandalism comes from anons... we could reveal that data only to autoconfirmed users? ] (] | ]) 09:01, 19 January 2010 (UTC)
:::::Cue "spy"/"wolf in sheep's clothing" user/vandal who edits properly enough to become autoconfirmed and then publicizes off-wiki every unwatched page they can find. --] ] 09:56, 19 January 2010 (UTC)
::::::Are there really people that determined to vandalize? It seems like you think there's a Guild of Vandals out there tapping their fingers while sitting on a throne made of malicious edits. When I tell people I edit the Wiki, they usually think it's pretty lame... Anyway, people can ''already'' write whatever they want on unwatched pages. If we knew which pages were unwatched, the number of unwatched pages would be less and the total number of vulnerable pages would decrease. The system would be more deterministic. ] (] | ]) 10:06, 19 January 2010 (UTC)
:::::::Yes, there are. And BTW autoconfirmed status requires only 4 days and 10 edits. Raw watchlist numbers are too dangerous for widespread use. ] <sup>]</sup> 11:04, 19 January 2010 (UTC)
::::::::I don't think any of this would make Misplaced Pages more vulnerable to vandalism than it already is, but whatever, I'll let the experts debate about it. ] (] | ]) 14:12, 19 January 2010 (UTC)
::::I wonder if it could be added as a privilege like rollback? It could very easily be taken away if the user starts to vandalize. ] (]) 14:54, 19 January 2010 (UTC)
::::: We can safely assume that any user who gets the privilege would not vandalize with the same account, or would vandalize having already saved (and possibly published) a large list of target pages. ]<sub>(])</sub> 15:21, 19 January 2010 (UTC)
::::::I have never seen anyone so determined to grief or vandalize. Do you have any examples of someone working that hard to disrupt the encyclopedia? ] (] | ]) 01:35, 20 January 2010 (UTC)
:::::::Sure, just peruse the pages at ]. –<font face="verdana" color="black">]</font>] 01:38, 20 January 2010 (UTC)
::::::::It looks like all those people are POV pushing specific topic articles. They aren't sniffing around for unwatched pages so they can vandalize them. These are not the people that would be targetting the list of unwatched pages for vandalism. ] (] | ]) 01:43, 20 January 2010 (UTC)
:::::::::There aren't many examples of this specific M.O. because typically the data has not been available. If you don't think a vandal would seek a low-profile page to vandalize were the data available, you clearly underestimate the garden-variety vandal. –<font face="verdana" color="black">]</font>] 01:47, 20 January 2010 (UTC)
::::::::::What I find hard to believe is that you would have someone SO determined to vandalize, that they would make X good edits over Y weeks to make an account capable of accessing the list of unwatched pages so that they could then find an unwatched page and then add "penis" to it. That's not how ''random'' vandalism works. The idea with the unwatched list is that you would cut down on the random vandalism. Sure you ''can'' beat the system. You ''can'' beat the system we currently have. I just don't think anyone would be willing to put in the effort to vandalize random pages. ] (] | ]) 01:54, 20 January 2010 (UTC)
:::::::::::Rollback isn't so hard to get and I worry more they would add sneaky vandalism (stuff that RCP and hugglers don't pick up on) than a simple penis or whale's vagina. –<font face="verdana" color="black">]</font>] 01:57, 20 January 2010 (UTC)
::::::::::::They can already do that, though. ] (] | ]) 02:01, 20 January 2010 (UTC)
:::::::::::::But they lack a list of targets they are sure no interested parties (who would notice an edit that looked legitimate but wasn't) are watching. –<font face="verdana" color="black">]</font>] 02:05, 20 January 2010 (UTC)
::::::::::::::So this vandal that you are worried about will make so many good edits, file a request for access to the unwatched list, and then make sneaky malicious edits that no one will really notice on extremely low traffic pages. It's probably safe to say that these people will be in the extreme minority, if any exist at all. I think the potential benefits of benevolent editors having access to the unwatched list well exceeds the potential (and IMO unbelievable) harms. ] (] | ]) 02:16, 20 January 2010 (UTC)
{{outdent}} I have to agree with AzureFury -- the likelihood of anyone doing this is so small as to be inconsequential. ] (]) 03:06, 20 January 2010 (UTC)
*One (probably pie-in-the-sky) possibility is to somehow use the toolserver data to create a ] style link/list (for example, see the link I posted at ]). I guess that access to the underlying list itself would need to be restricted somehow, but something like that ought to be doable. We really should also push for some sort of implementation which causes users who have not logged in for X number of days to be considered "inactive", thereby having their watchlists ignored, as well.<br/>— ] (]) 21:53, 19 January 2010 (UTC)
**You're pretty much rehashing the existing bug comments. --] ] 02:07, 20 January 2010 (UTC)
***Am I? Well, it never hurts to reiterate support, within reason of course. <small>That, and I completely gave up on anything coming from bugzilla quite some time ago. This isn't the place for that sort of conversation, though.</small><br/>— ] (]) 02:59, 20 January 2010 (UTC)
*as an admin I'd like to have such a list available -- like I think most admins, I add sensitive pages I come across to my watchlist, and this way I could spot others that need it, and also avoid duplicating watching that is already being done by several others. If 15 people are already doing it, they don't need me; if only 2, they might. ''']''' (]) 02:06, 20 January 2010 (UTC)
**Personally I have doubts even about allowing admins unlimited access to that raw data. It would only take one rogue admin (or compromised admin account) for a list of unwatched pages to get published somewhere: and that list would be too large, I think, for the problem to be easily dealt with after the event. ] <sup>]</sup> 14:44, 20 January 2010 (UTC)
:::There are so many problems with this scenario. A rogue admin or compromised account is ''already problematic'' so this would be no change. It's ironic that you should bring this up as there are currently a bunch of admins calling for sanctions against you for your use of the PROD tool. The unwatched list would hopefully be changing over time so if one person accessed and published it, after some amount of time, the pages that were published would no longer be unwatched. Further, let's suppose the unwatched list is published. Who will see it? How many people are going to visit a site to see the list of unwatched pages on Misplaced Pages? What fraction of those people are going to be vandals? Again, any system we can come up with is beatable if someone is determined enough. ''The system we currently have is beatable, and is beaten constantly.'' The idea with the unwatched list is to give benevolent editors the advantage and make it more difficult to vandalize. ] (] | ]) 16:30, 20 January 2010 (UTC)
::::"sanctions against you for your use of the PROD tool." - what? you're either talking about someone else or using the plural in a rather confusing way. Anyway, your rhetorical questions basically amount to "how bad can it be?" I'm telling you, it can be ''bad''. There are some really determined vandals out there, and this would be a godsend to them - and trust me, if it was released, they would find it. The risks are too high. PS "A rogue admin or compromised account is already problematic" is of course true; except most of the problems can be handled by desysopping and undoing their actions; whereas a leak of unwatched data would be a lot harder to deal with. ] <sup>]</sup> 16:47, 20 January 2010 (UTC)
:::::You've got a similar username as a certain other, currently high profile, administrator, who just last night put himself into the limelight. So, yea, AzureFury was thinking of someone else.<br/>— ] (]) 21:20, 20 January 2010 (UTC)
::::::Oops, my bad. I was randomly browsing contributions and I viewed a huge discussion on deletion of unsourced BLP's. Anyway, I was discussing this earlier. It's true that there are determined vandals, but they're determined to vandalize ''specific pages''. Do you believe that someone would put in the effort to vandalize ''random'' pages? Do we have any examples of this? ] (] | ]) 01:50, 21 January 2010 (UTC)
::::::See ] (most famously ]) and ]. ] <sup>]</sup> 02:41, 21 January 2010 (UTC)
:::::::Like I said, most of the long-term abusers are people vandalizing specific articles. Willy is an interesting example...did he ever have to make legitimate edits in order to make those changes, such as moving pages, etc? ] (] | ]) 02:56, 21 January 2010 (UTC)
::::::::Yes, once "autoconfirmed" was implemented, Willy had to start creating "sleeper" accounts, i.e. letting an account sit unused for a long time, doing just enough edits to get autoconfirmed, then starting to vandalize. I seem to recall one spot where he had several of these sleepers at once, and it took a while to get things under control again.
::::::::Also, if you don't think anyone would be that determined to vandalize, you've never visited ] or ]. &mdash; <b>]:<sup>]</sup></b> 00:44, 25 January 2010 (UTC)

:{{tick|18}} '''{{ucfirst:Already exists}}''' - Seems you guys are talking about ]. But since the list of unwatched pages is sensitive data that special page can only be viewed by admins. If you are an admin, then ] lists a whole bunch of extra pages that only admins can use. (Some of them would be very useful for a vandal...)
:But currently many of the special pages aren't updated, and Special:UnwatchedPages is one of them. From what I have seen in discussions elsewhere the reason might be this: When some servers crash, the rest of the servers get overloaded. As a quick fix the Wikimedia sys-admins (the people that manage the servers) then often disable such special pages to save some server cycles, since those special pages do pretty heavy database runs each time they are updated. Unfortunately the sys-admins tend to forget to enable the special pages again once all the servers are up and running. Sometimes they also disable a special page since they have made some system change that breaks the special page, but they haven't gotten around to updating the code for the special page.
:Oh, and the idea to create ] seems to be a perennial proposal here at the Village pumps. But as I said, it already exists. By the way, ] has a number of requests for adding more features to Special:UnwatchedPages.
:--] (]) 03:02, 25 January 2010 (UTC)

::It's a perennial proposal that is always accepted and never implemented, heh. ] (] | ]) 10:45, 26 January 2010 (UTC)

::I was aware of that page's existence, but I've never seen anything listed. From what I gather from the talk page, it seems to be down more than up over the last few years. Correct me if my deduction is incorrect. ] (]) 04:56, 25 January 2010 (UTC)

:::I have seen it working some months ago. But yeah, now that you mention it, it seems to mostly be down. I have seen several comments at some bugzilla requests that seem to say that ] is disabled at the bigger Wikipedias for performance reasons, but that it is still up and running on the smaller Wikipedias.
:::Come to think of it, this kind of service usually is better handled by the people on the toolserver, so we should probably ask them to make a similar service. Of course, it would have to be limited so only admins and other trusted users can see it. I think that we need more than just the admins to take care of the unwatched pages and adding them to their watchlists.
:::This is exactly the kind of thing that could have good use of a "trusted" user group between autoconfirmed and admin. Such a group should only be assigned manually, say by two admins marking the user as "trusted". That group could include stuff like rollback etc.
:::--] (]) 16:25, 25 January 2010 (UTC)
:::::There is a similar tool on the Toolserver - ]. It doesn't provide a list, but it can provide the information for any page, so it's somewhat more useful than a list that only covers a small fraction. Anyone can use it, but it won't give numbers for <30 watchers unless you're listed ] and you have a ] account. <span style="font-family:Broadway">]]</span> 18:14, 25 January 2010 (UTC)
:::::Which is what I mentioned near the top of this thread (except my comment didn't instruct how to gain full access). :-) ] (]) 06:38, 26 January 2010 (UTC)

== Banner advert for Haiti Fundraising ==

Can we change the ] to a banner aimed at raising funds for Haiti? Something like this:

]


--] (]) 10:26, 20 January 2010 (UTC)

:I doubt it, we're supposed to have a neutral POV, and part of that, I'm afraid, includes being neutral in fundraising - we use our 'fame' to raise money for ourselves and not others. Wikipedias attract millions upon millions of page views around the world daily; do we wish to be seen to be advocating the donation of money to one disaster and ignoring others? Do we wish to be put in a position where people clamour for a site notice update every time there is a humanitarian aid requirement somewhere in the world? Do we wish to field complaints from the press and public saying 'Hey, isn't that advertising? You said you'd never do that'? So, my view at least, is probably not. ] (]) 11:57, 20 January 2010 (UTC)

::I think we could include an external link at ]. ] (] | ]) 13:12, 20 January 2010 (UTC)

:No, we've gone over this many times, we don't advertise for anything, even charitable causes. We also don't use the articles to encourage donations per ]. ''']''' <sup>]</sup> 14:46, 20 January 2010 (UTC)

::Are you referring to some supposedly common interpretation of NPOV? Because the word donate does not occur in that policy. Also, who is "we"? I don't recall ever talking about donations on Misplaced Pages. Do you mean to say "this has been discussed in the community"? That's fine, but you don't need to be so dismissive towards good faith efforts. ] (] | ]) 16:19, 20 January 2010 (UTC)

:: Quote from ]: "All Misplaced Pages '''articles''' and other '''encyclopedic content''' must be written from a neutral point of view..." "...This is non-negotiable and expected of all '''articles''' and all editors."(my emphasis)

:: A ] isn't really part of the encyclopedic content, nor is it likely to be confused with Misplaced Pages content; so the NPOV rules aren't really an issue here. Though if we were really worried about NPOV we might make a case for not including such an appeal on the page about the actual disaster. --] (]) 13:13, 21 January 2010 (UTC)

:::Actually, if we all agreed that we'd want to run a Haiti banner for a week, I don't see why we couldn't. Proposals for continuous ads for charities are of course impossible, and we can't go running ad campaigns for just any disaster either, but a one week ad once every 3 years won't hurt anyone I think. The larger problem I see is that fundraising is usually rather country-dependent. Does anyone know of a portal or something that will geoip redirect folks to appropriate donation organisations or something? Otherwise this is gonna be a bit difficult. —] (] • ]) 16:35, 20 January 2010 (UTC)

::::No, we shouldn't. NPOV, and all that Misplaced Pages isn't. Also, how would we define which charities or disasters we allow donation appeals for? For big ones, why should we do it? Haiti donation appeals are already everywhere. And no matter how heart-rending a small disaster may be for those involved, we cannot do those either, because then we would have to do them all. Nope, not appropriate for us. ] (]) 04:25, 21 January 2010 (UTC)
:::::Agreed. It isn't Wikipedia's place to advocate for a cause. Besides, if you have access to any form of media, then you should already know of at least a dozen ways to donate to disaster relief. A banner here is unnecessary. ]] 04:30, 21 January 2010 (UTC)

::OK, so some of you don't want to be seen to support Save The Children (or any charity) because:
* Some people think that such a banner might somehow betray Wikipedia's commitment to having a Neutral Point of View on charity, charities, charitable giving in general and Save the Children (or whichever charity we promoted) in particular
* others are worried that it would somehow damage the project's credibility if some other people saw the banner and mistakenly assumed that Wikipedia advocates life and hope over death and destruction
* Also we wouldn't want to waste our precious time deciding which disasters deserved a site notice and which ones didn't, and we don't think the WM Foundation should worry about this either
* some people think that we shouldn't do anything because other people are doing it fine without us
So, my next question is: would you have the same objections if we were to use the site notice to support and to raise awareness of the ] or, of more immediate and practical help, ?--] (]) 12:22, 21 January 2010 (UTC)

I'm willing to be the bastard here: what does asking for funds for Haiti have to do with an encyclopedia? The two are completely unrelated, and a marriage between the two shouldn't be attempted. Good cause, yes, but totally irrelevant. ] <span style="color: #999;">// ] // ] //</span> 20:56, 25 January 2010 (UTC)

== Possible way forward on BLP semiprotection - proposal ==

Okay I propose the following:

*A bot runs and automatically semiprotects all BLPs created within the past seven days. It runs weekly. Thus the default state of any BLP is semiprotected. The first run the seven day prerequisite is turned off so all BLPs are captured.

*Any editor can request unprotection, and any admin can unprotect with a statement that it will then be on their watchlist. Thus we can liberally unprotect articles with editors vouching for and fixing content.

*Thus we have a functioning ''de facto'' flagged revisions, where admins can readily unlock articles for editing by anon IPs. Hopefully the emphasis at ] will accommodate this, with more requests for unprotection and less for protection.

*This is a compromise and practical way forward, where we can protect unwatched BLPs in one go, ''and'' try to accommodate IPs.

*We are not creating yet more discussion boards and are attempting to work with what we've got.

===Support===
# ] (] '''·''' ]) 01:52, 21 January 2010 (UTC)
# --]] 02:16, 21 January 2010 (UTC) - This would be a true solution, really addressing libel and vandalism, without any unnecessary removal of content.
# ] (]), as long as the "liberal unprotection" is actually ''liberal'' and not simply lip service.
# Yes, this actually addresses the problem, instead of exploiting it as an excuse for mass deletion. Hence it will no doubt attract little interest from the usual crowd who claim to care about the "BLP problem"; however, it can have my support.--] (]) 07:42, 21 January 2010 (UTC)
#This looks like a sensible solution to a serious problem ] (]) 08:53, 21 January 2010 (UTC)
# This also establishes a "revert to" version if an article is severely vandalized. Rather than having to nuke an article entirely, we have a base entry to build from. ''']''' <small>]</small> 14:28, 21 January 2010 (UTC)

===Oppose===
#I refuse to support anything that has a class of articles semiprotected by default. No matter how "liberal" unprotection is. Protection should always be a last resort option, after blocking and after trying to solve the problem if possible. Never a default. -]<small>(])</small> 03:39, 21 January 2010 (UTC)
::Really? ''Anything?'' How about userpages? Nobody has any business editing your userpage except you; least of all an anonymous IP. I think all userpages should be semi-protected by default. While I'm neutral on this specific proposal, your blanket statement seems a little extreme. ~] <small>(])</small> 00:02, 27 January 2010 (UTC)
#Protection without review is bordering on worthless. <span style="font-family:Broadway">]]</span> 03:52, 21 January 2010 (UTC)
#'''Oppose''' It will make an individual admin responsible for the contents of a BLP, whether they have edited it or not. It could lead to issues of ownership for that admin over the article. I would like to see better protection of BLPs, but this is not the answer. ] (]) 05:59, 21 January 2010 (UTC)
#Accepting default semi-protection in any form starts us down a dangerous path. <font face="Century Gothic">] <small>]</small> 15:01, 21 Jan 2010 (UTC)</font>
#I don't think we should start with the assumption that an article is violating policy and then eventually (perhaps never) get around to showing it does not. Given our premise as an encyclopedia anyone can edit, I don't think we should prevent IPs from editing until we have a good reason to in the specific case. Protection should not be used like that in my opinion, and our best practices seem to agree. We should respond to BLP problems promptly, but we should not treat all BLPs like they are a problem. ]<small> <sup>(Need help? ])</sup></small> 16:05, 21 January 2010 (UTC)
#It will not work. BLPs cannot be protected by half measures. Semiprotection raises the cost of doing vandalism so the insufficiently motivated will not do it, neither will a random passerby be able to undo it. What are left are the more dedicated folks out to get LPs, who are sophisticated, or at least motivated enough to break through semiprotection's very weak barriers. You'll need to full protect BLPs if you want to get anywhere. There are times when something is worth exactly the same as nothing. This is one of them. --] (]) 16:49, 21 January 2010 (UTC)
#'''Oppose''' This is not in any way de facto flagged revisions, because you need to go ask an administrator for permission before editing. How many non-autoconfirmed editors would even know how to do that? No, this is the same as permanent semi-protection of all BLPs, which is too harsh. Permanent flagged-protection of all BLPs, that would be a good idea that doesn't prevent IPs from editing, including fixing libel. --] (]) 15:21, 25 January 2010 (UTC)

===Neutral===
#Sometimes IPs remove libel. ] 02:24, 21 January 2010 (UTC)
#:True, but anything is better than certain bad actors around here starting a deletionist/inclusionist civil war (although it's probably too late now...).<br/>— ] (]) 02:37, 21 January 2010 (UTC)
#Sounds good in theory, but it's slightly bureaucratic and adds a lot of likely unneeded work. Instead the devs should get a good and loud nudging to get flagged revs up and running. ] <small> ]]'''</small> 08:54, 21 January 2010 (UTC)
# Potentially a good idea, but I suggest that we look at what any "unintended consequences" might be, first. I would suggest, instead, that all such articles, rather than being "semi-protected", be given a header stating "This article has not been reviewed for accuracy", which would accomplish much of what appears to be the goal (note that this also would apply to "unreferenced BLPs" and thus allow editors time to add references for notable people, and actually Prod or AfD the un-notable ones). ] (]) 15:22, 21 January 2010 (UTC)

===Discuss===
What does this plan to solve? It's obviously a well-intentioned and reasonable proposal, but I don't think it addresses our main concern: stale, unsourced biographies. &ndash;''']'''&nbsp;&#124;&nbsp;] 02:19, 21 January 2010 (UTC)

:It is prospective so that large numbers of BLPs are given some form of protection from wandering IPs, hopefully significantly reducing vandalism. By using it as above, it also acts as a flagged revision, with the RFPP board serving as a central place where articles can be unprotected and watched while they are edited. Julian, it is not the unsourced that is the problem, but the damaging material. This helps with all BLPs. Everyone is focussing on unsourced BLPs, but there are stacks more with maybe one or two inline references for which a large chunk of article might be problematic. There are many angles we can approach this from, and this is just one to at least slow down future vandalism. ] (] '''·''' ]) 02:28, 21 January 2010 (UTC)
:Stale unsourced anything aren't a particular concern. Libel is the concern with BLPs; being unsourced is a correctible defect like with any other article (and staleness isn't a problem while we have no deadline).--] (]) 07:45, 21 January 2010 (UTC)
:See my suggestion above regarding semi-protection. Also note that, as far as I can tell, WP has not had any libel suits under US or Florida law, which are the only applicable laws. In an earlier discussion regarding BLPs, Mike Godwin sent a missive telling us not to make policies which showed any implication of recognition of other laws (as a matter of WMF concern). ] (]) 15:25, 21 January 2010 (UTC)

::Are IPs even the main source of BLP problems? I thought it was more about sloppy editing (i.e. repeating rumors, quoting unreliable sources, etc...) ]<small> <sup>(Need help? ])</sup></small> 16:09, 21 January 2010 (UTC)


== Propose to amend our FlaggedRevs proposal ==

In lieu of the BLP deletions, I consider our current FlaggedRevs proposal outdated and overtaken by reality. Instead I propose we immediately adopt the German model. Reasons:
# It requires no developer work to make the specifics from our original request possible. This is much kinder on Aaron, who is doing a lot of work to make some rare situations possible in the FlaggedRevs extension that will likely only be used for what was gonna be our test period. A waste of development effort if you ask me.
# It is in the interest of our BLP articles.
# Why do we need a test if it's already clear that BLP will trump everything? Statements by arbcom members and Jimmy Wales clearly indicate a full endorsement of the BLP deletions. Some of those deletions might have been prevented if we had adopted the German model 2 years ago. To protect people against slander and to keep Wikipedia growing, we clearly also need FlaggedRevs going into the future.
# Why should we want to limit the test/usage of FlaggedRevs to biographies, if the rules and concerns of BLP are not limited to biographies?
# Why do we need metrics on the test phase of FlaggedRevs, if this is clearly the way we are going? What are we gonna do? Reverse position on BLP issues if it affects our readership too much? Seems unlikely.
# Why do we need to wait for interface improvements? The usability team is always working in parallel; why should this part of the software be any different?
I think that counters all of the reasons that have caused our earlier calls for immediate deployment of flagged revisions to be ignored, does it not? Focus on making sure it is stable enough for en.wp and let's just run that code. —] (] • ]) 10:32, 22 January 2010 (UTC)
#'''Oppose'''. This is impractical, because we have too many articles. ]_] 19:30, 23 January 2010 (UTC)
#'''Oppose''' for the same reason Ruslik0 does - between the number of articles we have and the level of editing, we would either have an enormous backlog of unapproved edits, draining our volunteer editors' time and in practice often denying anonymous users the right to edit, or else we would have an enormous number of edits approved without scrutiny, causing potential harm to the encyclopedia and (again) denying anonymous editors the right to actually fix the problems caused by approving harmful edits. Most likely we would have the worst case scenario - both at once. <span style="white-space:nowrap">— ] (])</span> 19:50, 23 January 2010 (UTC)
#'''Strong oppose'''. First, political reasons: it took a lot of argument to reach the current plan, and I'd prefer to not have to repeat that. Second, the English Wikipedia is by far the largest wiki. Even if the rate of backlog is acceptable on dewiki or Wikinews, there's no guarantee it wouldn't be a problem here. Third, now that we've committed to the current plan, it makes little sense to abandon the development work that's been done to further it. I share the concern and frustration at the delay, but I'm sure that a more usable, more open version of FlaggedRevs is in the interest of long-term use and adoption. {&#123;]&#124;]&#124;]&#124;⚡}&#125; 17:07, 26 January 2010 (UTC) (iPod edit)
#:Thanks for the background. I take it ] and ] is what people are currently working towards, based on ]. Correct? --'''<font color="#0000FF">]</font><font color=" #FFBF00">]</font>''' 21:40, 26 January 2010 (UTC)
#::Yes, that is the current general plan, and that poll is the one that confirmed the current plan. There was an earlier poll on using a more German-like implementation of FlaggedRevs, but at around 60% support it was deemed that there was insufficient consensus. The flagged protection and proposed revisions (FPPR) poll was closer to 80%, which is generally taken as enough of a supermajority to be a rough consensus. {&#123;]&#124;]&#124;]&#124;⚡}&#125; 17:59, 27 January 2010 (UTC)

Strongly support automatically checkusering all active users (new and existing) at regular intervals. If it were automated -- e.g., a script runs that compares IPs, user agent, other typical subscriber info -- there would be no privacy violation, because that information doesn't have to be disclosed to any human beings. Only the "hits" can be forwarded to the CU team for follow-up. I'd run that script daily. If the policy forbids it, we should change the policy to allow it. It's mind-boggling that Wikipedia doesn't do this already. It's a basic security precaution. (Also, email-required registration and get rid of IP editing.) ] (]) 02:39, 23 November 2024 (UTC)
:I don't think you've been reading the comments from people who know what they are talking about. There would be hundreds, at least, of hits per day that would require human checking. The policy that prohibits this sort of massive breach of privacy is the Foundation's and so not one that en.wp could change even if it were a good idea (which it isn't). ] (]) 03:10, 23 November 2024 (UTC)
::A computer can be programmed to check for similarities or patterns in subscriber info (IP, etc), and in editing activity (time cards, etc), and content of edits and talk page posts (like the existing language similarity tool), with various degrees of certainty in the same way ClueBot does with ORES when it's reverting vandalism. And the threshold can be set so it only forwards matches of a certain certainty to human CUs for review, so as not to overwhelm the humans. The WMF can make this happen with just $1 million of its $180 million per year (and it wouldn't be violating its own policies if it did so). Enwiki could ask for it, other projects might join too. ] (]) 05:24, 23 November 2024 (UTC)
:::"Oh now I see what you mean, Levivich, good point, I guess you know what you're talking about, after all."
:::"Thanks, Thryduulf!" ] (]) 17:42, 23 November 2024 (UTC)
::::I seem to have missed this comment, sorry. However I am ''very'' sceptical that sockpuppet detection is meaningfully automatable. From what CUs say it is as much art as science (which is why SPI cases can result in determinations like "possilikely"). This is the sort of thing that is difficult (at best) to automate. Additionally the only way to reliably develop such automation would be for humans to analyse and process a massive amount of data from accounts that both are and are not sockpuppets and classify results as one or the other, and that analysis would be a massive privacy violation on its own. Assuming you have developed this magic computer that can assign a likelihood of any editor being a sock of someone who has edited in the last three months (data older than that is deleted) on a percentage scale, you then have to decide what level is appropriate to send to humans to check. Say for the sake of argument it is 75%; that means roughly one in four people being accused are innocent and are having their privacy impinged unnecessarily - and how many CUs are needed to deal with this caseload? Do we have enough? SPI isn't exactly backlog free and there aren't hordes of people volunteering for the role (although unbreaking RFA ''might'' help with this in the medium to long term). The more you reduce the number sent to CUs to investigate, the less benefit there is over the status quo.
::::In addition to all the above, how similar is "similar" in terms of articles edited, writing style, timecard, etc.? How are you avoiding legitimate sockpuppets? ] (]) 18:44, 23 November 2024 (UTC)
:::::You know this already but for anyone reading this who doesn't: when a CU "checks" somebody, it's not like they send a signal out to that person's computer to go sniffing around. In fact, all the subscriber info (IP address, etc.) is already logged on the WMF's server logs (as with any website). A CU "check" just means a volunteer CU gets to look at a portion of those logs (to look up a particular account's subscriber info). That's the privacy concern: we have rules, rightfully so, about when volunteer CUs (not WMF staff) can read the server logs (or portions of them). Those rules do not apply to WMF staff, like devs and maintenance personnel, nor do they apply to the WMF's own software reading its own logs. Privacy is only an issue when those logs are revealed to volunteer CUs.
:::::So... feeding the logs into software in order to train the software doesn't violate anyone's policy. It's just letting a computer read its own files. Human verification of the training outcomes also doesn't have to violate anyone's privacy -- just don't use volunteer CUs to do it, use WMF staff. Or, anonymize the training data (changing usernames to "Example1", "Example2", etc.). Or use historical data -- which would certainly be part of the training, since the most effective way would be to put ''known'' socks into the training data to see if the computer catches them.
:::::Anyway, training the system won't violate anyone's privacy.
:::::As for the hit rate -- 75% would be way, way too low. We'd be looking for definitely over 90% or 95%, and probably more like 99.something percent. ClueBot doesn't get vandalism wrong 1 out of 4 times; neither should CluebotCU. Heck, if CluebotCU can't do better than 75%, it's not worth doing. A more interesting question is whether the 99.something% hit rate would be helpful to CUs, or whether that would only catch the socks that are so obvious you don't even need CU to recognize them. Only testing in the field would tell.
:::::But overall, AI looking for patterns, and checking subscriber info, edit patterns, and the content of edits, would be very helpful in tamping down on socking, because the computer can make far more checks than a human (a computer can look at 1,000 accounts and 100,000 edits no problem, which no human can do), it'll be less biased than humans, and it can do it all without violating anyone's privacy -- in fact, lowering the privacy violations by lowering the false positives, sending only high-probability matches (90%+, not 75%+) to humans for review. And it can all be done with existing technology, and the WMF has the money to do it. ] (]) 19:38, 23 November 2024 (UTC)
::::::The more you write the clearer you make it that you don't understand checkuser or the WMF's policies regarding privacy. It's also clear that I'm not going to convince you that this is unworkable, so I'll stop trying. ] (]) 20:42, 23 November 2024 (UTC)
:::::::Yeah it's weird how repeatedly insulting me hasn't convinced me yet. ] (]) 20:57, 23 November 2024 (UTC)
::::::::If you are unable to distinguish between reasoned disagreement and insults, then it's not at all weird that reasoned disagreement fails to convince you. ] (]) 22:44, 23 November 2024 (UTC)
::::::{{ping|Levivich}} Whatever existing data set we have has too many biases to be useful for this, and this is going to be prone to false positives. AI needs ''lots'' of data to be meaningfully trained. Also, AI here would be learning a '']''; when the output is not in fact a function of the input, there's nothing for an AI model to target, and this is very much the case here. On ], where I am a CheckUser, almost all edit summaries are automated even for human edits (just like clicking the rollback button is, or undoing an edit is by default), and it is ''very'' hard to meaningfully tell whether someone is a sock or not without highly case-specific analysis. No AI model is better than the data it's trained on.
::::::Also, about the privacy policy: you are completely incorrect when you say {{tq|"Those rules do not apply to WMF staff, like devs and maintenance personnel, nor do they apply to the WMF's own software reading its own logs"}}. Staff can only access that information on a ] basis, just like CheckUsers, and data privacy laws like the EU's and California's mean you cannot just do whatever random thing you want with the information you collect from users about them.--] ] 21:56, 23 November 2024 (UTC)
:::::::So which part of the ] would prohibit the WMF from developing an AI that looks at server logs to find socks? Do you want me to quote to you the portions that explicitly disclose that the WMF uses personal information to develop tools and improve security? ] (]) 22:02, 23 November 2024 (UTC)
::::::::I mean yeah that would probably be more productive than snarky bickering ]] 22:05, 23 November 2024 (UTC)
::::::::{{ping|Levivich}} Did you read the part where I mentioned privacy ''laws''? Also, in this industry ''no one'' is allowed unfettered usage of private data even internally; there are ''internal'' policies that govern this that are broadly similar to the privacy policy. It's one thing to ''test'' a proposed tool on an IP address like ], but it's another to train an AI model on it. Arguably an equally big privacy concern is the usage of ''new'' data from new users after the model is trained and brought online. The foundation is already hiding IP addresses by default even for anonymous users soon, and they will not undermine that mission through a tool like this. Ultimately, the ] has to assume legal responsibility and liability for such a thing; put yourself in their position and think of whether they'd like the liability of something like this.--] ] 22:13, 23 November 2024 (UTC)
:::::::::So can you quote a part of the privacy policy, or a part of privacy laws, or anything, that would prohibit feeding server logs into a "Cluebot-CU" to find socking?
:::::::::Because I can quote the part of the ] that allows it, and it's a lot:
:::::::::{{tq2|We may use your public contributions, either aggregated with the public contributions of others or individually, '''to create new features or data-related products''' for you or to '''learn more about how the Wikimedia Sites are used''' ... <p>Because of how browsers work, we receive some information automatically when you visit the Wikimedia Sites ... This information includes the type of device you are using (possibly including unique device identification numbers, for some beta versions of our mobile applications), the type and version of your browser, your browser's language preference, the type and version of your device's operating system, in some cases the name of your internet service provider or mobile carrier, the website that referred you to the Wikimedia Sites, which pages you request and visit, and the date and time of each request you make to the Wikimedia Sites. <p>Put simply, we use this information to enhance your experience with Wikimedia Sites. For example, '''we use this information to administer the sites, provide greater security, and fight vandalism'''; optimize mobile applications, customize content and set language preferences, '''test features to see what works, and improve performance; understand how users interact with the Wikimedia Sites, track and study use of various features, gain understanding about the demographics of the different Wikimedia Sites, and analyze trends'''. ... <p>We actively collect some types of information with a variety of commonly-used technologies. These generally include tracking pixels, JavaScript, and a variety of "locally stored data" technologies, such as cookies and local storage. ... Depending on which technology we use, locally stored data may include text, Personal Information (like your IP address), and information about your use of the Wikimedia Sites (like your username or the time of your visit). ... 
'''We use this information to make your experience with the Wikimedia Sites safer and better, to gain a greater understanding of user preferences and their interaction with the Wikimedia Sites, and to generally improve our services.''' ... <p>We and our service providers use your information ... to create new features or data-related products for you or to learn more about how the Wikimedia Sites are used ... '''To fight spam, identity theft, malware and other kinds of abuse.''' ... '''To test features to see what works, understand how users interact with the Wikimedia Sites, track and study use of various features, gain understanding about the demographics of the different Wikimedia Sites and analyze trends.''' ... <p>When you visit any Wikimedia Site, we automatically receive the IP address of the device (or your proxy server) you are using to access the Internet, which could be used to infer your geographical location. ... '''We use this location information to make your experience with the Wikimedia Sites safer and better, to gain a greater understanding of user preferences and their interaction with the Wikimedia Sites, and to generally improve our services'''. For example, we use this information '''to provide greater security''', optimize mobile applications, and '''learn how to expand and better support Wikimedia communities'''. ... <p>'''We, or particular users with certain administrative rights as described below, need to use and share your Personal Information if it is reasonably believed to be necessary to enforce or investigate potential violations of our Terms of Use''', this Privacy Policy, or any Wikimedia Foundation or user community-based policies. ... '''We may also disclose your Personal Information if we reasonably believe it necessary to detect, prevent, or otherwise assess and address potential spam, malware, fraud, abuse, unlawful activity, and security or technical concerns'''. ... 
'''To facilitate their work, we give some developers limited access to systems that contain your Personal Information, but only as reasonably necessary for them to develop and contribute to the Wikimedia Sites.''' ...}} Yeah that's a lot. Then there's that says {{tq2|It is important for us to be able to make sure everyone plays by the same rules, and sometimes that means we need to investigate and share specific users' information to ensure that they are. <p>For example, user information may be shared when a CheckUser is investigating abuse on a Project, such as suspected use of malicious '''"sockpuppets"''' (duplicate accounts), vandalism, harassment of other users, or disruptive behavior. If a user is found to be violating our Terms of Use or other relevant policy, the user's Personal Information may be released to a service provider, carrier, or other third-party entity, for example, to assist in the targeting of IP blocks or to launch a complaint to the relevant Internet Service Provider.}}
:::::::::So using IP addresses, etc., to develop new tools, to test features, to fight violations of the Terms of Use, and disclosing that info to Checkusers... all explicitly permitted by the Privacy Policy. ] (]) 22:22, 23 November 2024 (UTC)
::::::::::{{ping|Levivich}} {{Tq|"We, or particular users with certain administrative rights as described below, need to use and share your Personal Information if it is reasonably believed to be necessary to enforce or investigate potential violations of our Terms of Use"}} &ndash; "reasonably believed to be necessary" is not going to hold up in court when it's sweepingly applied to everyone. This doesn't even take into consideration the laws I mentioned, like ]. I'm not a lawyer, and I'm guessing neither are you. If you want to be the one assuming the legal liability for this, contact the board today and sign the contract. Even then they would probably not agree to such an arrangement. So you're ]: only the foundation could even consider assuming this risk. Also, it's clear that you do not have a single idea of how developing something like this works if you think it can be done for $1 million. Something this complex has to be done ''right'' and tech salaries and computing resources are expensive.--] ] 22:28, 23 November 2024 (UTC)
:::::::::::What I am suggesting does not involve sharing everyone's data with Checkusers. It's pretty obvious that looking at their own server logs is "necessary to enforce or investigate potential violations of our Terms of Use". Five people is how big the WMF's ] team is, @ $200k each, $1m/year covers it. Five people is enough for that team to improve ORES, so another five-person team dedicated to "ORES-CU" seems a reasonable place to start. They could double that, and still have like $180M left over. ] (]) 22:40, 23 November 2024 (UTC)
:::::::::::::{{ping|Levivich}} Yeah no, lol. $200k each is not a very competitive total compensation, considering that that needs to include benefits, health insurance, etc. This doesn't include their manager or the hefty hardware required to run ML workflows. It doesn't include the legal support required given the data privacy law compliance needed. Capriciously looking at the logs does not count; accessing data of users the foundation cannot reasonably have said to be likely to cause abuse is not permissible. This is all aside from the bias and other data quality issues at hand here. You can delude yourself all you want, but ]. I'm finished arguing with you anyways, because this proposal is either way dead on arrival.--] ] 23:45, 23 November 2024 (UTC)
:::::::::::::@], haggling over the math here isn't really important. You could quintuple the figures @] gave and the Foundation would still have millions upon millions of dollars left over. -- ] (]) 23:48, 23 November 2024 (UTC)
::::::::::::::{{ping|asilvering}} The point I'm making is Levivich does not understand the complexity behind this kind of thing and thus his arguments are not to be given weight by the closer. ] ] 23:56, 23 November 2024 (UTC)
:::::::::::::As a statistician/data scientist, @] is correct about the technical side of this—building an ML algorithm to detect sockpuppets would be pretty easy. Duplicate user algorithms like these are common across many websites. For a basic classification task like this (basically an ML 101 homework problem), I think $1 million is about right. As a bonus, the same tools could be used to identify and correct for possible canvassing or brigading, which behaves a lot like sockpuppetry from a statistical perspective. A similar algorithm is already used by Twitter's ] feature.
:::::::::::::IANAL, so I can't comment on the legal side of this, and I can't comment on whether that money would be better-spent elsewhere since I don't know what the WMF budget looks like. Overall though, the technical implementation wouldn't be a major hurdle. ] (]) 20:44, 24 November 2024 (UTC)
::::::::::::::Third-party services provide this kind of algorithm-based account fraud protection as an alternative to building and maintaining internally. <span style="background:#F3F3F3; color:inherit; padding:3px 9px 4px">]</span> 23:41, 24 November 2024 (UTC)
::::::::::::::Building such a model is only a small part of a real production system. If this system is to operate on all account creations, it needs to be at least as reliable as the existing systems that handle account creations. As you probably know, data scientists developing such a model need to be supported by software engineers and site reliability engineers supporting the actual system. Then you have the problem of ''new'' sockers who are not on the list of sockmasters to check against. Non-English-language speakers often would be put at a disadvantage too. It's not as trivial as you make it out to be, thus I stand by my estimate.--] ] 06:59, 25 November 2024 (UTC)
:::::::::::::::None of you have accounted for ].
:::::::::::::::I don't think we need to spend more time speculating about a system that WMF Legal is extremely unlikely to accept. Even if they did, it wouldn't exist until several years from now. Instead, let's try to think of things that we can do ourselves, or with only a very little assistance. Small, lightweight projects with full community control can help us now, and if we prove that ____ works, the WMF might be willing to adopt and expand it later. ] (]) 23:39, 25 November 2024 (UTC)
::::::::::::::::That's a mistake -- doing the same thing Misplaced Pages has been doing for 20+ years. The mistake is in leaving it to volunteers to catch sockpuppetry, rather than insisting that the WMF devote significant resources to it. And it's a mistake because the one thing we volunteers ''can't'' do, that the WMF ''can'' do, is comb through the server logs looking for patterns. ] (]) 23:44, 25 November 2024 (UTC)
:::::::::::::::::Not sure about the "building an ML algorithm to detect sockpuppets would be pretty easy" part, but I admire the optimism. It is certainly the case that it is possible, and people have done it with a surprising level of success a very long time ago in ML terms, e.g. https://doi.org/10.1016/j.knosys.2018.03.002. These projects tend to rely on the category graph to distinguish sock and non-sock sets for training, i.e. the categorization of accounts as confirmed or suspected socks. However, the category graph is woefully incomplete, i.e. there is information in the logs that is not reflected in the graph, so ensuring that all ban evasion accounts are properly categorized as such might help a bit. ] (]) 03:58, 26 November 2024 (UTC)
::::::::::::::::::Thankfully, we wouldn't have to build an ML algorithm, we can just use one of the existing ones. Some are even open source. Or WMF could use a third party service like the aforementioned sift.com. ] (]) 16:17, 26 November 2024 (UTC)
:::::::::::::::::::Let me guess: Essentially, you would like their machine-learning team to use Sift's {{tq|AI-Powered Fraud Protection}}, which from what I can glance, handles {{tq|safeguarding subscriptions to defending digital content and in-app purchases}} and {{tq|helps businesses reduce friction and stop sophisticated fraud attacks that gut growth}}, to provide the ability for us to {{tq|automatically checkuser all active users}}? <span style="color:#7E790E;">2601AC47</span> (]<big>·</big>]<big>·</big>]) <span style="font-size:80%">Isn't a IP anon</span> 16:25, 26 November 2024 (UTC)
::::::::::::::::::::The WMF already has the ability to "automatically checkuser all users" (the verb "checkuser" just means "look at the server logs"), I'm suggesting they use it. And that they use it in a sophisticated way, employing (existing, open source or commercially available) AI/ML technologies, like the same kind we already use to automatically revert vandalism. Contrary to claims here, doing so would not be illegal or even expensive (comparatively, for the WMF). ] (]) 16:40, 26 November 2024 (UTC)
:::::::::::::::::::::So, in my attempt to get things set right and steer towards a consensus that is satisfactory, I sincerely follow-up: ] that in this vast, uncharted sea? And could this mean any more in the next 5 years? <span style="color:#7E790E;">2601AC47</span> (]<big>·</big>]<big>·</big>]) <span style="font-size:80%">Isn't a IP anon</span> 16:49, 26 November 2024 (UTC)
::::::::::::::::::::::What lies beyond is ]. ] (]) 17:26, 26 November 2024 (UTC)
:::::::::::::::::::::::So, @], I think the answer to your question is "tell the WMF we really, really, really would like more attention to sockpuppetry and IP abuse from the ML team". -- ] (]) 17:31, 26 November 2024 (UTC)
::::::::::::::::::::::::Which I don't suppose someone can at the next board meeting on December 11? <span style="color:#7E790E;">2601AC47</span> (]<big>·</big>]<big>·</big>]) <span style="font-size:80%">Isn't a IP anon</span> 18:00, 26 November 2024 (UTC)
:::::::::::::::::::I may also point to ], where they mention {{tq|development in other areas, such as social media features and '''machine learning expertise'''}}. <span style="color:#7E790E;">2601AC47</span> (]<big>·</big>]<big>·</big>]) <span style="font-size:80%">Isn't a IP anon</span> 16:36, 26 November 2024 (UTC)
::::::::::::::::::::e.g. ] ] (]) 17:02, 26 November 2024 (UTC)
:::::::::::::::::::::And that mentions , still in beta it seems. <span style="color:#7E790E;">2601AC47</span> (]<big>·</big>]<big>·</big>]) <span style="font-size:80%">Isn't a IP anon</span> 17:10, 26 November 2024 (UTC)
:::::::::::::::::::::'''3 days!''' When I first posted my comment and some editors responded that I didn't know what I was talking about, it can't be done, it'd violate the privacy policy and privacy laws, WMF Legal would never allow it... I was wondering how long it would take before somebody pointed out that this thing that can't be done has already been done and has been under development for ].
:::::::::::::::::::::::''Of course'' it's already under development, it's pretty obvious that the same Wikipedia that developed ], one of the world's earlier and more successful examples of ML applications, would try to employ ML to fight multiple-account abuse. I mean, I'm obviously not gonna be the first person to think of this "innovation"!
:::::::::::::::::::::Anyway, it took 3 days. Thanks, Sean! ] (]) 17:31, 26 November 2024 (UTC)
:::::::::::::::::{{outdent|4}} Unlike what is being proposed, SimilarEditors only works based on publicly available data (e.g. similarities in editing patterns), and not IP data. To quote the page Sean linked, {{tq|in the model's current form, we are only considering public data, but most saliently private data such as IP addresses or user-agent information are features currently used by checkusers that could be later (carefully) incorporated into the models}}.{{pb}}So, not only does the current model not look at IP data, but the research project also acknowledges that actually using such data should only be done in a "careful" way, because of those very same privacy policy issues quoted above.{{pb}}On the ML side, however, this does prove that it's being worked on, and I'm honestly not surprised at all that the WMF is working on machine learning-based tools to detect sockpuppets. ] (] · ]) 17:50, 26 November 2024 (UTC)
::::::::::::::::::Right. We should ask WMF to do the {{tqq|later (carefully) incorporated into the models}} part (especially since it's now later). BTW, the API already pulls IP and other metadata. SimilarExtensions (a tool that uses the API) doesn't release that information to CheckUsers, by design. And that's a good thing, we can't just release all IPs to CheckUsers, it does indeed have to be done carefully. But user metadata ''can'' be used. What I'm suggesting is that the WMF ''should'' proceed to develop these types of tools (including the careful use of user metadata). ] (]) 17:57, 26 November 2024 (UTC)
:::::::::::::::::{{outdent|1}} Not really clear that they're pulling IP data from logged-in users. The relevant sections reads:{{pb}}{{tqb|<code>USER_METADATA</code> (203MB): for every user in <code>COEDIT_DATA</code>, this contains basic metadata about them (total number of edits in data, total number of pages edited, user or IP, timestamp range of edits).}}{{pb}}This reads like they're collecting the username ''or'' IP depending on whether they're a logged-in user or an IP user. ] (] · ]) 18:14, 26 November 2024 (UTC)
:::::::::::::::::In a few years people might look back on these days when we only had to deal with simple devious primates employing deception as the halcyon days. ] (]) 18:33, 26 November 2024 (UTC)
::::::::::::::::I assumed 1 million USD/year ''was'' accounting for Hofstadter's law several times over. Otherwise it feels wildly pessimistic. ] (]) 15:57, 26 November 2024 (UTC)
{{hat|IP range ] blocked by a CU}}
:::Why do you guys hate the WMF so much? If it weren’t for them, you wouldn’t have this website at all. ] (]) 23:51, 28 November 2024 (UTC)
::::We don’t. <span style="color:#7E790E;">2601AC47</span> (]<big>·</big>]<big>·</big>]) <span style="font-size:80%">Isn't a IP anon</span> 01:13, 29 November 2024 (UTC)
:::::Then why do you guys always whine and complain about how incompetent they are and how much money they make and are actively against their donation drives? ] (]) 01:29, 29 November 2024 (UTC)
::::::We don't. ] (]) 02:47, 29 November 2024 (UTC)
:::::::Don’t “we don’t” me again. ] (]) 03:11, 29 November 2024 (UTC)
::::::This may be surprising, but it turns out there's more than one person on Misplaced Pages, and many of us have different opinions on things. You're probably thinking of @]'s essay.
::::::I disagree with his argument that the WMF is incompetent, but at the same time, ]. Just because the WMF spent their first $20 million ''extremely'' well (on creating Misplaced Pages) doesn't mean giving them $200 million would make them 10× as good. Nobody here thinks the WMF budget should be cut to $0; there's just some of us who think it needs a haircut.
::::::For me it comes down to, "if you don't donate to the WMF, ]"? I'd rather you give that money to ]—feeding African children is more important than reskinning Misplaced Pages—but if you won't, I'd doubt giving it to the WMF is worse than whatever else you were going to spend it on. Whether we should cut back on ads depends on whether this money is coming out of donors' charity budgets or their regular budgets. ] (]) 03:10, 29 November 2024 (UTC)
:::::::I already struggle enough with prioritizing charities, deciding which ones are ethical, and how I should be spending every single penny I get on charities dealing with PIA and trans issues because those are the most oppressed groups in the world right now. The WMF is not helping people who are actively getting killed and having their rights taken away, therefore they are not important. ] (]) 03:15, 29 November 2024 (UTC)
::::::::In that case, I'd suggest checking out ], which has some very good recommendations. That said, this subthread feels wildly off-topic. ] (]) 03:33, 29 November 2024 (UTC)
:::::::::So goes this whole discussion; but to give a slightly longer answer to the IP: We’re not telling them to ], we’re trying (despite everything) to establish relations, consensus and mutual trust. And hopefully long-term progress on key areas of contention. We ''don’t'' hate them, or else they’ll dismiss us completely. <span style="color:#7E790E;">2601AC47</span> (]<big>·</big>]<big>·</big>]) <span style="font-size:80%">Isn't a IP anon</span> 03:44, 29 November 2024 (UTC)
{{hab}}
:Any such system would be subject to numerous biases or be easily defeatable. Such an automated anti-abuse system would have to be exclusively a foundation initiative as only they have the resources for such a monumental undertaking. It would need its own team of developers.--] ] 18:57, 23 November 2024 (UTC)
Absolutely no chance that this would pass. ], even though there isn't a flood of opposes. There are two problems:
#The existing CheckUser team barely has the bandwidth for the existing SPI load. Doing this on every single new user would be impractical and would enable ]'s by diverting valuable CheckUser bandwidth.
#Even if we had enough CheckUsers, this would be a severe privacy violation absolutely prohibited under the Foundation privacy policy.
The ''vast majority'' of vandals and other disruptive users don't need CU involvement to deal with. There's very little to be gained from this.--] ] 18:36, 23 November 2024 (UTC)
:It is perhaps an interesting conversation to have but I have to agree that it is unworkable, and directly contrary to foundation-level policy which we cannot make a local exemption to. En.wp, I believe, already has the largest CU team of any WMF project, but we would need ''hundreds'' more people on that team to handle something like this. In the last round of appointments, the committee approved exactly '''one''' checkuser, and that one was a returning former member of the team. And there is the very real risk that if we appointed a whole bunch of new CUs, some of them would abuse the tool. ] ] 18:55, 23 November 2024 (UTC)
::And it's worth pointing out that the Committee approving too few volunteers for Checkuser (regardless of whether you think they are or aren't) is not a significant part of this issue. There simply are not tens of people who are putting themselves forward for consideration as CUs. Since 2016 54 applications (an average of per year) have been put forward for consideration by Functionaries (the highest was 9, the lowest was 2). Note this is total applications not applicants (more than one person has applied multiple times), and is not limited to candidates who had a realistic chance of being appointed. ] (]) 20:40, 23 November 2024 (UTC)
:::The dearth of candidates has for sure been an ongoing thing, it's worth reminding admins that they don't have to wait for the committee to call for candidates, you can put your name forward at any time by emailing the committee. ] ] 23:48, 24 November 2024 (UTC)
:Generally, I tend to get the impression from those who have checkuser rights that CU should be done as a last resort, and other, less invasive methods are preferred, and it would seem that indiscriminate use of it would be a bad idea, so I would have some major misgivings about this proposal. And given the ANI case, the less user information that we retain, the better (which is also probably why temporary accounts are a necessary and prudent idea despite other potential drawbacks). ] (]) 03:56, 23 November 2024 (UTC)
:Oppose. A lot has already been written on the unsustainable workload for the CU team this would create and the amount of collateral damage; I'll add in the fact that our most notorious sockmasters in areas like PIA already use highly sophisticated methods to evade CU detection, and based on what I've seen at the relevant SPIs most of the blocks in these cases are made with more weight given to the behaviour, and even then only after lengthy deliberations on the matter. These sort of sockmasters seem to have been in the OP's mind when the request was made, and I do not see automated CU being of any more use than current techniques against such dedicated sockmasters. And, has been mentioned before, most cases of sockpuppetry (such as run-of-the-mill vandals and trolls using throwaway accounts for abuse) don't need CU anyways. '']]'' 08:17, 24 November 2024 (UTC)
::These are, unfortunately, fair points about the limits of CU and the many experienced and dedicated ban evading actors in PIA. CU information retention policy is also a complicating factor. ] (]) 08:28, 24 November 2024 (UTC)
:::As I said in my original post, recidivist socks often get better at covering their "tells" each time making behavioural detection increasingly difficult and meaning the entire burden falls on the honest user to convince an Admin to take an SPI case seriously with scarce evidence. After many years I'm tired of defending various pages from sock POV edits and if WMF won't make life easier then increasingly I just won't bother, I'm sure plenty of other users feel the same way. ] (]) 05:45, 26 November 2024 (UTC)


=== SimilarEditors ===
The development of ] -- the type of tool that could be used to do what Mztourist suggests -- has been "stalled" since 2023 and downgraded to low-priority in 2024, according to its documentation page and related phab tasks (see e.g. ], ], ]). Anybody know why? ] (]) 17:43, 26 November 2024 (UTC)


:Honestly, the main function of that sort of thing seems to be compiling data that is already available on XTools and various editor interaction analyzers, and then presenting it nicely and neatly. I think that such a page could be useful as a sanity check, and it might even be worth having that sort of thing as a standalone toolforge app, but I don't really see why the WMF would make that particular extension a high priority. — ]&nbsp;<sub>]</sub> 17:58, 26 November 2024 (UTC)
::Well, it doesn't have to be ''that particular extension'', but it seems to me that the entire "idea" has been stalled, unless they're working on another tool that I'm unaware of (very possible). (Or, it could be because of recent changes in domestic and int'l privacy laws that derailed their previous development advances, or it could be because of advancements in ML elsewhere making in-house development no longer practical.) <p>As to why the WMF would make this sort of problem a high priority, I'd say because the spread of misinformation on Wikipedia by sockpuppets is a big problem. Even without getting into the use of user metadata, just look at recent SPIs I filed, like ] and ]. That involved no private data at all, but a computer could have done automatically, in seconds, what took me hours to do manually, and those socks could have been uncovered ''before'' they made thousands and thousands of edits spreading misinformation. If the computer looked at private data as well as public data, it would be even more effective (and would save CUs time as well). Seems to me to be a worthy expenditure of 0.5% or 1% of the WMF's annual budget. ] (]) 18:09, 26 November 2024 (UTC)
:This looks really interesting. I don't really know how extensions are rolled out to individual wikis - can anyone with knowledge about that summarise if having this tool turned on (for check users/relevant admins) for en.wp is feasible? Do we need a RFC, or is this a "maybe wait several years for a phab ticket" situation? ]] 18:09, 26 November 2024 (UTC)
:I find it amusing that ~4 separate users above are arguing that automatic identification of sockpuppets is impossible, impractical, and the WMF would never do it—and meanwhile, the WMF is already doing it. ] (]) 19:29, 27 November 2024 (UTC)
::So, discussion is over? <span style="color:#7E790E;">2601AC47</span> (]<big>·</big>]<big>·</big>]) <span style="font-size:80%">Isn't a IP anon</span> 19:31, 27 November 2024 (UTC)
::I think what's happening is that people are having two simultaneous discussions – automatic identification of sockpuppets is already being done, but what people say "the WMF would never do" is using private data (e.g. IP addresses) to identify them. Which adds another level of (ethical, if not legal) complications compared to what SimilarEditors is doing (only processing data everyone can access, but in an automated way). ] (] · ]) 07:59, 28 November 2024 (UTC)
:::"automatic identification of sockpuppets is already being done" is probably an overstatement, but I agree that there may be a potential legal and ethical minefield between the Similarusers service that uses public information available to anyone from the databases after redaction of private information (i.e. coarse-grained sampling of revision timestamps combined with an attempt to quantify page intersection data), and a service that has access to the private information associated with a registered account name. ] (]) 11:15, 28 November 2024 (UTC)
:::The WMF said they're planning on incorporating IP addresses and device info as well! ] (]) 21:21, 29 November 2024 (UTC)
::Yes, automatic identification of (these) sockpuppets is impossible. There are many reasons for this, but the simplest one is this: These types of tools require hundreds of edits – at minimum – to return any viable data, and the sort of sockmasters who get accounts up to that volume of edits know how to evade detection by tools that analyse public information. The markers would likely indicate people from similar countries – naturally, two Cypriots would be interested in ] and over time similar hour and day overlaps will emerge, but what's to let you know whether these are actual socks when they're evading technical analysis? You're back to square one. There are other tools such as ] which I consider equally circumstantial; an analysis of myself returns a high likelihood of me being other administrators and arbitrators, while analysing an alleged sock currently at SPI returns the filer as the third most likely sockmaster. This is not commentary on the tools themselves, but rather simply the way things are. ]<sup>]</sup><sub>]</sub> 17:42, 28 November 2024 (UTC)
:::Oh, fun! Too bad it's CU-restricted, I'm quite curious to know what user I'm most stylometrically similar to. -- ] (]) 17:51, 28 November 2024 (UTC)
::::That would be {{noping|LittlePuppers}} and {{noping|LEvalyn}}. ]<sup>]</sup><sub>]</sub> 03:02, 29 November 2024 (UTC)
:::::Fascinating! One I've worked with, one I haven't, both AfC reviewers. Not bad. -- ] (]) 06:14, 29 November 2024 (UTC)
:::Idk, the half dozen ARBPIA socks I recently reported at SPI were obvious af to me, as are several others I haven't reported yet. That may be because that particular sockfarm is easy to spot by its POV pushing and a few other habits; though I bet in other topic areas it's the same. ] helps because it forces the socks to make 500 edits minimum before they can start POV pushing, but still we have to let them edit for a while post-XC just to generate enough diffs to support an SPI filing. Software that combines tools like Masz and SimilarEditor, and does other kinds of similar analysis, could significantly reduce the amount of editor time required to identify and report them. ] (]) 18:02, 28 November 2024 (UTC)
:::I think it is possible, studies have demonstrated that it is possible, but it is true that having a sufficient number of samples is critical. Samples can be aggregated in some cases. There are several other important factors too. I have tried some techniques, and sometimes they work, or let's say they can sometimes produce results consistent with SPI results, better than random, but with plenty of false positives. It is also true that there are a number of detection countermeasures (that I won't describe) that are already employed by some bad actors that make detection harder. But I think the objective should be modest, to just move a bit in the right direction by detecting more ban evading accounts than are currently detected, or at least to find ways to reduce the size of the search space by providing ban evasion candidates. Taking the human out of the detection loop might take a while. ] (]) 18:39, 28 November 2024 (UTC)
:::If you mean it's never going to be possible to catch ''some'' sockpuppets—the best-hidden, cleverest, etc. ones—you're completely correct. But I'm guessing we could cut the amount of time SPI has to spend dramatically with just some basic checks. ] (]) 02:27, 29 November 2024 (UTC)
::::I disagree. Empirically, the vast majority of time spent at SPI is not on finding possible socks, nor is it using the CheckUser tool on them, but rather it's the CU completed cases (of which there are currently 14 and I should probably stop slacking and get onto some) with non-definitive technical results waiting on an administrator to make the final determination on whether they're socks or not. Extension:SimilarUsers would concentrate various information that already exists (], RoySmith's ) in one place, but I wouldn't say the accessibility of these tools is a cause of SPI backlog. An AI analysis tool to give an accurate magic number for likelihood? I'm anything but a Luddite, but still believe that's wishful thinking. ]<sup>]</sup><sub>]</sub> 03:02, 29 November 2024 (UTC)
:::::Something seems better than nothing in this context, doesn't it? EIA and the Similarusers service don't provide an estimate of the significance of page intersections. An intersection on a page with few revisions or few unique actors or few pageviews etc. is very different from a page intersection on the Donald Trump page. That kind of information is probably something that could sometimes help, even just to evaluate the importance of intersection evidence presented at SPIs. It seems to me that any kind of assistance could help. And another thing about the number of edits is that too many samples can also present challenges related to noise, with signals getting smeared out, although the type of noise in a user's data can itself be a characteristic signal in some cases it seems. And if there are too few samples, you can generate synthetic samples based on the actual samples and inject them into spaces. Search strategy matters a lot. The space of everyone vs everyone is vast, so good luck finding potential matches in that space without a lot of compute, especially for diffs. But many socks inhabit relatively small subspaces of Wikipedia, at least in the way they edit(war)/POV-push etc. in their topic of interest. So, choosing the candidate search space and search strategy wisely can make the problem much more tractable for a given topic area/subspace. Targeted fishing by picking a potential sock and looking for potential matches (the strategy used by the Similarusers service and CU I guess) is obviously a very different challenge than large-scale industrial fishing for socks in general. ] (]) 04:08, 29 November 2024 (UTC)
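The significance point can be illustrated with a toy, IDF-style weighting: score each shared page by the inverse log of how many editors touch it, so overlap on an obscure page counts for far more than overlap on the Donald Trump page. The editor counts below are invented:

```python
import math

def weighted_intersection(pages_a, pages_b, editors_per_page):
    """Score the page overlap of two accounts, down-weighting pages that
    many editors touch (a shared obscure page is stronger evidence)."""
    shared = set(pages_a) & set(pages_b)
    return sum(1.0 / math.log(1 + editors_per_page[p]) for p in shared)

# Invented editor counts per page.
editors_per_page = {"Donald Trump": 50000, "Obscure manuscript": 3}

a = {"Donald Trump", "Obscure manuscript"}  # suspected sock
b = {"Donald Trump", "Obscure manuscript"}  # candidate match
c = {"Donald Trump"}                        # random popular-page editor

print(weighted_intersection(a, b, editors_per_page))
print(weighted_intersection(a, c, editors_per_page))
```

The raw intersection sizes (2 vs 1) barely differ, but the weighted scores make the shared obscure page dominate.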
:::::And to continue the whining about existing tools, EIA and the Similarusers service use a suboptimal strategy in my view. If the objective is page intersection information for a potential sock against a sockmaster, and a ban evasion source has employed n identified actors so far e.g. almost 50 accounts for Icewhiz, the source's revision data should be aggregated for the intersection. This is not difficult to do using the category graph and the logs. ] (]) 04:25, 29 November 2024 (UTC)
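The aggregation idea is cheap once the account list exists; a minimal sketch (account names and pages invented):

```python
def aggregate_pages(accounts, pages_by_account):
    """Union of pages edited by all identified accounts of one ban-evasion source."""
    agg = set()
    for acct in accounts:
        agg |= pages_by_account.get(acct, set())
    return agg

# Invented data: a sockmaster plus two identified socks.
pages_by_account = {
    "Master": {"A", "B"},
    "Sock1":  {"B", "C"},
    "Sock2":  {"D"},
}
suspect_pages = {"C", "D", "E"}

source_pages = aggregate_pages(["Master", "Sock1", "Sock2"], pages_by_account)
print(sorted(source_pages & suspect_pages))                # overlap against the whole source
print(sorted(pages_by_account["Master"] & suspect_pages))  # invisible if only the master is checked
```

Intersecting the suspect against the aggregated source surfaces overlap that checking any single identified account would miss.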
::::::There is so much more that could be done with the software. EIA gives you page overlaps (and isn't 100% accurate at it), but it doesn't tell you:
::::::*how many times the accounts made the same edits (tag team edit warring)
::::::*how many times they voted in the same formal discussions (RfC, AfD, RM, etc) and whether they voted the same way or different (vote stacking)
::::::*how many times they use the same language and whether they use unique phraseology
::::::*whether they edit at the same times of day
::::::*whether they edit on the same days
::::::*whether account creation dates (or start-of-regular-editing dates) line up with when other socks were blocked
::::::*whether they changed focus after reaching XC and to what extent (useful in any ARBECR area)
::::::*whether they "gamed" or "rushed" to XC (same)
::::::All of this (and more) would be useful to see in a combined way, like a dashboard. It might make sense to restrict access to such compilations of data to CUs, and the software could also throw in metadata or subscriber info in there, too (or not), and it doesn't have to reduce it all into a single score like ORES, but just having this info compiled in one place would save editors the time of having to compile it manually. If the software auto-swept logs for this info and alerted humans to any "high scores" (however defined, eg "matches across multiple criteria"), it would probably not only reduce editor time but also increase sock discovery. ] (]) 04:53, 29 November 2024 (UTC)
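At least two of the signals listed above, time-of-day similarity and same-day editing overlap, are computable from public timestamps alone; a rough sketch with synthetic timestamps (no real API is queried here):

```python
from datetime import datetime

def hour_histogram(timestamps):
    """Normalized distribution of edits over the 24 hours of the day."""
    hist = [0] * 24
    for ts in timestamps:
        hist[ts.hour] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu = sum(x * x for x in u) ** 0.5
    nv = sum(y * y for y in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def day_overlap(ts_a, ts_b):
    """Jaccard overlap of the calendar days on which two accounts edited."""
    days_a = {t.date() for t in ts_a}
    days_b = {t.date() for t in ts_b}
    return len(days_a & days_b) / len(days_a | days_b)

# Synthetic timestamps: two accounts active on the same days, similar hours.
acct_a = [datetime(2024, 11, d, h) for d in range(1, 11) for h in (14, 15, 22)]
acct_b = [datetime(2024, 11, d, h) for d in range(1, 11) for h in (14, 15, 23)]

print(round(cosine(hour_histogram(acct_a), hour_histogram(acct_b)), 2))
print(round(day_overlap(acct_a, acct_b), 2))
```

A dashboard would just compute a battery of such scores per candidate pair and present them side by side for a human to weigh.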
:::::::This is like one of my favorite strategies for meetings. Propose multiple things, many of which are technically challenging, then just walk out of the meeting.
:::::::The 'how many times the accounts made the same edits' is probably do-able because you can connect reverted revisions to the revisions that reverted them using json data in the database populated as part of the tagging system, look at the target state reverted to and whether the revision was an exact revert. ...or maybe not without computing diffs, having just looked at an article with a history of edit warring. ] (]) 07:43, 29 November 2024 (UTC)
:::::::I agree with Levivich that automated, privacy-protecting sock-detection is not a pipe dream. I proposed a system something like this in ], see also ], and more recently ]. However, it definitely requires a bit of software development and testing. It also requires the community and the foundation devs or product folks to prioritize the idea. ''']'''<span style="border:2px solid #073642;background:rgb(255,156,0);background:linear-gradient(90deg, rgba(255,156,0,1) 0%, rgba(147,0,255,1) 45%, rgba(4,123,134,1) 87%);">]</span> 02:27, 10 December 2024 (UTC)
*'''Comment'''. For some time I have vehemently suspected that this site is crawling with massive numbers of sockpuppets, that the community seems to be unable or unwilling to recognise probable sockpuppets for what they are, and it is not feasible to send them to SPI one at a time. I see a large number of accounts that are sleepers, or that have low edit counts, trying to do things that are controversial or otherwise suspicious. I see them showing up at discussions in large numbers and in quick succession, and offering !votes consisting of interpretations of our policies and guidelines that may not reflect consensus, or other statements that may not be factually accurate.
:I think the solution is simple: when closing community discussions, admins should look at the edit count of each !voter when determining how much weight to give his !vote. The lower the edit count, the greater the level of sleeper behaviour, and the more controversial the subject of the discussion is amongst the community, the less weight should be given to the !vote.
:For example, if an account with less than one thousand edits !votes in a discussion about 16th century Tibetan manuscripts, we may well be able to trust that !vote, because the community does not care about such manuscripts. But if the same account !votes on anything connected with "databases" or "lugstubs", we should probably give that !vote very little weight, because that was the subject of a massive dispute amongst the community, and any discussion on that subject is not particularly unlikely to be crawling with socks on both sides. The feeling is that, if you want to be taken seriously in such a controversial discussion, you need to make enough edits to prove that you are a real person, and not a sock. ] (]) 15:22, 12 December 2024 (UTC)
::The site presumably has a large number of unidentified sockpuppets. As for the identified ban evading accounts, accounts categorized or logged as socks, if you look at 2 million randomly selected articles for the 2023-10-07 to 2024-10-06 year, just under 2% of the revisions are by ban evading actors blocked for sockpuppetry (211,546 revisions out of 10,732,361). A problem with making weight dependent on edit count is that the edit count number does not tell you anything about the probability that an account is a sock. Some people use hundreds of disposable accounts, making just a few edits with each account. Others stick around and make thousands of edits before they are detected. Also, Misplaced Pages provides plenty of tools that people can use to rapidly increase their edit count. ] (]) 16:12, 12 December 2024 (UTC)


*I strongly oppose any idea of mass-CUing any group of users, and I'm pretty sure the WMF does too. This isn't the right way to fight sockpuppets. ] (]) 14:35, 15 December 2024 (UTC)
First off, before anyone blows a gasket, this is a fairly tongue-in-cheek suggestion (I say "fairly" because I do sort of wish that something like this would happen). But, I'm quite aware of the perennial proposal which sort of addresses this.
::Can I ask why? Is it a privacy-based concern? IPs are automatically collected and stored for 90 days, and maybe for years in the backups, regardless of CUs. That's a 90 day window that a machine could use to do something with them without anyone running a CU and without anyone having to see what the machine sees. ] (]) 15:05, 15 December 2024 (UTC)
:::Primarily privacy concerns, as well as concerns about false positives. A lot of people here probably share an IP with other editors without even knowing it. I also would like to maintain my personal privacy, and I know many other editors would too. There are other methods of fighting sockpuppets that don't have as much collateral damage, and we should pursue those instead. ] (]) 15:16, 17 December 2024 (UTC)
:::Also, it wouldn't even work on some sockpuppets, because IP info is only retained for 90 days, so a blocked editor could just wait out the 90 days and then return with a new account. ] (]) 15:19, 17 December 2024 (UTC)
:@]—one situation where I think we could pull a ''lot'' of data, and probably detect tons of sockpuppets, is !votes like RfAs and RfCs. Those have a ''lot'' of data, in addition to a very strong incentive for socking—you'd expect to see a bimodal distribution where most accounts have moderately correlated views, but a handful have extremely strong correlations (always !voting the same way), more than could plausibly happen by chance or by overlapping views. For accounts in the latter group, we'd have strong grounds to suspect collusion/canvassing or socking.
:RfAs are already in a very nice machine-readable format. RfCs aren't, but most could easily be made machine-readable (by adopting a few standardized templates). We could also build a tool for semi-automated recoding of old RfCs to get more data. ] (]) 18:56, 16 December 2024 (UTC)
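A rough sketch of the kind of correlation screen described above. Everything here is invented for illustration: the account names, the thresholds, and the null model (independent 50/50 coin-flip !votes), which is far simpler than real !voting behaviour; a serious version would need a much better baseline.

```python
# Flag pairs of accounts whose !vote agreement across shared discussions
# is too high to be plausible under a naive chance model.
from itertools import combinations
from math import comb

def agreement(votes_a, votes_b):
    """Fraction of shared discussions where two accounts !voted the same way."""
    shared = set(votes_a) & set(votes_b)
    if not shared:
        return None, 0
    same = sum(1 for d in shared if votes_a[d] == votes_b[d])
    return same / len(shared), len(shared)

def binomial_tail(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): chance of agreeing k+ times at random."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def suspicious_pairs(votes, min_shared=5, alpha=0.01):
    """votes: {account: {discussion_id: 'support' | 'oppose'}}.

    Returns account pairs whose agreement rate would be very unlikely
    if their !votes were independent coin flips.
    """
    flagged = []
    for a, b in combinations(sorted(votes), 2):
        rate, n = agreement(votes[a], votes[b])
        if n >= min_shared:
            same = round(rate * n)
            if binomial_tail(same, n) < alpha:
                flagged.append((a, b))
    return flagged
```

With this toy model, two accounts that !vote identically in ten shared discussions get flagged, while a pair agreeing half the time does not; real data would of course need to account for the fact that legitimate editors with similar views also correlate.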
::Would that data help with the general problem? If there are a lot of socks on an RfA, I'd expect that to be picked up by editors. Those are very well-attended. The same may apply to many RfCs. Perhaps the less well-attended ones might be affected, but the main challenge is article edits, which will not be similarly structured. ] (]) 19:13, 16 December 2024 (UTC)
:::{{tqb|Would that data help with the general problem? If there are a lot of socks on an RfA, I'd expect that to be picked up by editors.}}
:::Given we've had situations of , I'm not too sure of this myself. If someone ''did'' create a bunch of socks, as some people have alleged in this thread, it'd be weird of them not to use those socks to influence policy decisions. I'm pretty skeptical, but I do think investigating would be a good idea (if nothing else because of how important it is—even the ''possibility'' of substantial RfA/RfC manipulation is quite bad, because it undermines the whole idea of consensus). ] (]) 21:04, 16 December 2024 (UTC)
::::RFAs, RfCs, RMs, AfDs, and arbcom elections. ] (]) 23:11, 17 December 2024 (UTC)


===What do we do with this information?===
Anyway, first I have a bit of an admission to make: I'm a horrible speller (realistically, if I'm not being self-deprecating, I'm probably slightly above average, mostly because I've become a decent typist over the years). It really makes little, if any, difference to me if the word describing "The spectral composition of visible light" is spelled "color" or "colour". To me, I personally learned "color", those who I have the most day-to-day exposure to use "color", and most importantly my (en-us) spellchecker ''doesn't flag "color"''! Realistically, while it may bug me for a short period of time to start seeing "colour" all over the place, it would be easy enough to get used to if it weren't flagged as a damned misspelling.
I think we've put the cart before the horse here a bit. While we've established it's possible to detect most sockpuppets automatically—and the WMF is already working on it—it's not clear what this would actually achieve, because having multiple accounts isn't against the rules. I think we'd need to establish a set of easy-to-enforce boundaries for people using multiple accounts. My proposal is to keep it simple—two accounts controlled by the same person can't edit the same page (or participate in the same discussion) without disclosing they're the same editor.] (]) 04:41, 14 December 2024 (UTC)


:This is already covered by ] I think. ''']'''<span style="border:2px solid #073642;background:rgb(255,156,0);background:linear-gradient(90deg, rgba(255,156,0,1) 0%, rgba(147,0,255,1) 45%, rgba(4,123,134,1) 87%);">]</span> 05:03, 14 December 2024 (UTC)
So, with the above established, I'd like to humbly suggest that we develop a "en-wp" dictionary, distribute it to anyone and everyone who will take it, and use that here. The hell with ENGVAR, we'll write our own variation and stick to it! While we're at it, we may as well lobby for the banning of ] and the ] as well. I think we've all had enough of their divisive shenanigans! Who's with me?<br/>— ] (]) 22:39, 25 January 2010 (UTC)
:Good idea /in theory/, but of course, who will decide if it's color or colour, and how much bitching will there be that 'their' way is the better way? ] (]) 23:03, 25 January 2010 (UTC)
::And as there are multiple legitimate ways to disclose, not all of which are machine readable, any automatically generated list is going to need human review. ] (]) 10:13, 14 December 2024 (UTC)
:::Yes, that's definitely the case, an automatic sock detection should probably never be an autoblock, or at least not unless there is a good reason in that specific circumstance, like a well-trained filter for a specific LTA. Having the output of automatic sock detection should still be restricted to CU/OS or another limited user group who can be trusted to treat possible user-privacy-related issues with discretion, and have gone through the appropriate legal rigmarole. There could also be some false positives or unusual situations when piloting a program like this. For example, I've seen dynamic IPs get assigned to someone else after a while, which is unlikely but not impossible depending on how an ISP implements DHCP, though I guess collisions become less common with IPv6. The fingerprinting could also use a larger number of datapoints to reduce the likelihood of false positives. ''']'''<span style="border:2px solid #073642;background:rgb(255,156,0);background:linear-gradient(90deg, rgba(255,156,0,1) 0%, rgba(147,0,255,1) 45%, rgba(4,123,134,1) 87%);">]</span> 10:31, 14 December 2024 (UTC)
::Heh, there's an easy solution to that, we simply develop our own spelling convention, just like our buddy ] did! For example, instead of "color" or "colour", we spell it "colur"! {{=)|grin}}
::::I think we are probably years away from being able to rely on autonomous agents to detect and block socks without a human in the loop. For now, people need as much help as they can get to identify ban evasion candidates. ] (]) 10:51, 14 December 2024 (UTC)
::In all seriousness, something along these lines is likely to happen "in the real world" eventually you know, if it's not already underway. The web being a written medium, which brings those of us in various disparate parts of the world together, simply has to have a profound impact on the development of the language and writing in particular. Of course, there's quite a bit of "dwell time" to things which are put online, and that, together with the fact that we as people are naturally somewhat averse to change, means that there certainly won't be a change overnight, and there likely will never be too drastic a change (for example, Old English to modern English). A change will surely develop though, and likely as not toward Webster's more "Americanized" spellings. I don't say that out of any sort of national pride or anything like that, it's just that "our" words are shorter (never mind the fact that the 'net and media are flooded with American writing...). I can hear people howling about that through my computer, putting down "txt spk" and the like, but the fact is that groups of people will always seek the path of least resistance, and fewer characters to type is that path. Anyway, I'm not sure what prompted this interest in the subject, but I figured that I may as well talk about it. *shrug*<br/>— ] (]) 07:01, 26 January 2010 (UTC)
::::{{tqb|or at least not unless there is a good reason in that specific circumstance,}}
:::If the object is to create a dictionary which could be used in spellcheckers so that valid variants would not be flagged, the answer is simple, include both (or all valid) variants. Both "color" and "colour" would be included. Indeed, simply taking a good US English and a good UK English dictionary and merging them (and removing dupes) would be a good start. We might want to add Misplaced Pages-specific terms like "ArbCom" and "copyvio" that come up on talk pages a lot. ] ] 16:38, 31 January 2010 (UTC)
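The merge described above is simple enough to sketch. This is a toy illustration only: the word lists are invented placeholders, and real spellchecker dictionaries (hunspell and the like) carry affix rules and flags, not just one word per line.

```python
# Merge several word lists into one spellchecker list, dropping duplicates
# case-insensitively while keeping the first-seen casing of each word.
def merge_wordlists(*wordlists):
    seen = {}
    for words in wordlists:
        for w in words:
            seen.setdefault(w.lower(), w)  # keep first-seen casing
    return sorted(seen.values(), key=str.lower)

us_words = ["color", "center", "ArbCom"]
uk_words = ["colour", "centre", "color"]   # overlap with the US list is dropped
wiki_jargon = ["copyvio", "ArbCom"]        # project-specific additions

merged = merge_wordlists(us_words, uk_words, wiki_jargon)
```

Both "color" and "colour" survive the merge, so neither variant would be flagged, which is the whole point of the combined dictionary.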
::::Yep, basically I'm saying we need to define "good reason". The obvious situation is automatically blocking socks of blocked accounts. I also think we should just automatically prevent detected socks from editing the same page (ideally make it impossible, to keep it from being done accidentally). ] (]) 17:29, 14 December 2024 (UTC)


== Requiring registration for editing ==
== Extension of "recent event" tag to cover programmes about to start a new series ==
:''{{small|{{a note}} This section was split off from ] and the "parenthetical comment" referred to below is: {{tqq|(Also, email-required registration and get rid of IP editing.)}}—03:49, 26 November 2024 (UTC)}}''
@], about your parenthetical comment on requiring registration:
{{pb}}Part of the eternally unsolvable problem is that new editors are frankly bad at it. I can give examples from my own editing: Create an article citing a personal blog post as the main source? Check. Merge two articles that were actually different subjects? Been there, done that, got the revert. Misunderstand and mangle wikitext? More times than I can count. And that's after I created my account. Like about half of experienced editors, I edited as an IP first, fixing a typo here or reverting some vandalism there.
{{pb}}But if we don't persist through these early problems, we don't get experienced editors. And if we don't get experienced editors, Misplaced Pages will die.
{{pb}}Requiring registration ("get rid of IP editing") shrinks the number of people who edit. The Portuguese Misplaced Pages banned IPs only from the mainspace three years ago. . After the ban went into effect, they had 10K or 11K registered editors each month. It's since dropped to 8K. The number of contributions has dropped, too. They went from 160K to 210K edits per month down to 140K most months.
{{pb}}Some of the experienced editors have said that they like this. No IPs means less impulsive vandalism, and the talk pages are stable if you want to talk to the editor. Fewer newbies means I don't "have to" clean up after so many mistake-makers! Fewer editors, and especially fewer inexperienced editors, is more convenient – in the short term. But I wonder whether they're going to feel the same way a decade from now, when their community keeps shrinking, and they start wondering when they will lose critical mass.
{{pb}}The same thing happens in the real world, by the way. Businesses want to hire someone with experience. They don't want to train the helpless newbie. And then after years of everybody deciding that training entry-level workers is ], they all look around and say: Where are all the workers that I need? Why didn't someone else train the next generation while I was busy taking the easy path?
{{pb}}In case you're curious, there is a Misplaced Pages that puts all of the IP and newbie edits under "PC" type restrictions. Nobody can see the edits until they've been approved by an experienced editor. The rate of vandalism visible to ordinary readers is low. Experienced editors love the level of control they have. Have a look at during the last decade. Is that what you want to see here? If so, we know how to make that happen. The path to that destination even looks broad, easy, and paved with all kinds of good intentions. ] (]) 04:32, 23 November 2024 (UTC)
:Size isn't everything... what happened to their output--the quality of their encyclopedias--after they made those changes? ] (]) 05:24, 23 November 2024 (UTC)
::Well, I can tell you objectively that the number of edits declined, but "quality" is in the eye of the beholder. I understand that the latter community has the lowest use of inline citations of any mid-size or larger Misplaced Pages. What's now yesterday's TFA there wouldn't even be rated B-class here due to whole sections not having any ref tags. In terms of citation density, their FA standard is currently where ours was >15 years ago.
::But I think you have missed the point. Even if the quality has gone up according to the measure of your choice, if the number of contributors is steadily trending in the direction of zero, what will the quality be when something close to zero is reached? That community has almost halved in the last decade. How many articles are out of date, or missing, because there simply aren't enough people to write them? A decade from now, with half as many editors again, how much worse will the articles be? We're none of us idiots here. We can see the trend. We know that people die. You have doubtless seen this famous line:
::<blockquote>All men are mortal. Socrates is a man. Therefore, Socrates is mortal.</blockquote>
::I say:
::<blockquote>All Misplaced Pages editors are mortal. Dead editors do not maintain or improve Misplaced Pages articles. Therefore, maintaining and improving Misplaced Pages requires editors who are not dead.</blockquote>
::– and, ], we are going to die, my friend. ]. If we want Misplaced Pages to outlive us, we cannot be so shortsighted as to care only about the quality today, and never the quality the day after we die. ] (]) 06:13, 23 November 2024 (UTC)
:::Trends don't last forever. Enwiki's active user count decreased from its peak over a few years, then flattened out for over a decade. The quality increased over that period of time (by any measure). Just because these other projects have shed users doesn't mean they're doomed to have zero users at some point in the future. And I think there's too many variables to know how much any particular change made on a project affects its overall user count, nevermind the quality of its output. ] (]) 06:28, 23 November 2024 (UTC)
:::] If the graph to the right accurately reflects the age distribution of Misplaced Pages users, then a large chunk of the user base will die off within the next decade or two. Not to be dramatic, but I agree that requiring registration to edit, which will discourage readers from editing in the first place, will hasten the project's decline.... ] (]) 14:40, 23 November 2024 (UTC)
::::😂 Seriously? What do you suppose that chart looked like 20 years ago, and then what happened? ] (]) 14:45, 23 November 2024 (UTC)
:::::There are significantly more barriers to entry than there were 20 years ago, and over that time the age profile has increased (quite significantly iirc). Adding more barriers to entry is not the way to solve the issues caused by barriers to entry. ] (]) 15:50, 23 November 2024 (UTC)
::::::{{clear}}"" - maybe the demographics of the community will change. ] (]) 16:30, 23 November 2024 (UTC)
:::::::That talks about LLM usage in articles, not the users. <span style="color:#7E790E;">2601AC47</span> (]|]) <span style="font-size:80%"><span style="color:grey;">Isn't a IP anon</span></span> 16:34, 23 November 2024 (UTC)
::::::::Or you could say it's about a user called PaperQA2 that writes Misplaced Pages articles significantly more accurate than articles written by other users. ] (]) 16:55, 23 November 2024 (UTC)
:::::::::No, it is very clearly about a language model. As far as I know, PaperQA2, or WikiCrow (the generative model using PaperQA2 for question answering), has not actually been making any edits on Misplaced Pages itself. ] (] · ]) 16:58, 23 November 2024 (UTC)
::::::::::That is true. It is not making any edits on Misplaced Pages itself. There is a barrier. But my point is that in the future that barrier may not be there. There may be users like PaperQA2 writing articles better than other users and the demographics will have changed to include new kinds of users, much younger than us. ] (]) 17:33, 23 November 2024 (UTC)
:::::::::::And who will never die off! ] (]) 17:39, 23 November 2024 (UTC)
::::::::::::But which will not be ''Misplaced Pages''. ] (]) 06:03, 24 November 2024 (UTC)
:::::In re "What do you suppose that chart looked like 20 years ago": I believe that the numbers, very roughly, are that the community has gotten about 10 years older, on average, than it was 20 years ago. That is, we are bringing in some younger people, but not at a rate that would allow us to maintain the original age distribution. (Whether the original age distribution was a good thing is a separate consideration.) ] (]) 06:06, 24 November 2024 (UTC)
::::I like looking at the graph hosted on Toolforge (for anyone who might go looking for it later, there's a link to it at {{section link|Misplaced Pages:WikiProject Editor Retention|Resources}}). It shows histograms of how many editors have edited in each month, grouped by all the editors who started editing in the same month. The data is noisy, but it does seem to show an increase in editing tenure since 2020 (when the pursuit of a lot of solo hobbies picked up, of course). Prior to that, there does seem to be a bit of slow growth in tenure length since the lowest point around 2013. ] (]) 17:18, 23 November 2024 (UTC)
::::The trend is a bit clearer when looking at the . (To see the trend when looking at the , the default colour range needs to be shifted to accommodate the smaller numbers.) ] (]) 17:25, 23 November 2024 (UTC)
:::::I'd say that the story there is: Something amazing happened in 2006. Ours (since both of us registered our accounts that year) was the year from which people stuck around. I think that would be just about the time that the wall o' automated rejection really got going. There was some UPE-type commercial pressure, but it didn't feel unmanageable. It looks like an inflection point in retention. ] (]) 06:12, 24 November 2024 (UTC)
::::::I don't think something particularly amazing happened in 2006. I think the ] starting in 2004 attracted a large land rush of editors as Misplaced Pages became established as a top search result. The cohort of editors at that time had the opportunity to cover a lot of topics for the first time on Misplaced Pages, requiring a lot of co-ordination, which created bonds between editors. As topic coverage grew, there were fewer articles that could be more readily created by generalists, the land rush subsided, and there was less motivation for new editors to persist in editing. Boom-bust cycles are common for a lot of popular things, particularly in tech where newer, shinier things launch all the time. ] (]) 19:07, 24 November 2024 (UTC)
:::::::Ah yes, that glorious time when we gained an article on every Pokemon character and, it seems, every actor who was ever credited in a porn movie. Unfortunately, many of the editors I bonded with then are no longer active. Some are dead, some finished school, some presumably burned out, at least one went into the ministry. ] 23:49, 26 November 2024 (UTC)
:{{tq|Have a look at what happened to the size of their community.}}—I'm gonna be honest: eyeballing it, I don't actually see much (if any) difference with enwiki's activity. only convinces people when the dataset passes the interocular trauma test (e.g. ]).
:On the other hand, if there's a dataset of "when did $LANGUAGEwiki adopt universal pending changes protections", we could settle this argument once and for all using a real statistical model that can deliver precise effect sizes on activity. Maybe ''then'' we can all finally ]. ] (]) 18:08, 26 November 2024 (UTC)
:This is requested once or twice a year, and the answer will always be no. You would know this if you read ], as is requested at the top of this page ] (]) 08:09, 17 December 2024 (UTC)
This particular idea will not pass, but the binary developing in the discussion is depressing. A bargain where we allow IPs to edit (or unregistered users generally when IPs are masked), and therefore will sit on our hands when dealing with abuse and even harassment is a grim one. Any steps taken to curtail the second half of that bargain would make the first half stronger, and I am generally glad to see thoughts about it, even if they end up being impractical. ] (]) 02:13, 24 November 2024 (UTC)
:I don't want us to sit on our hands when we see abuse and harassment. I think our toolset is about 20 years out of date, and I believe there are things we could do that will help (e.g., ], cross-wiki checkuser tools for Stewards, detecting and responding to a little bit more information about devices/settings ). ] (]) 06:39, 24 November 2024 (UTC)
::Temporary accounts will help with the casual vandalism, but they’re not going to help with abuse and harassment. If it limits the ability to see ranges, it will make issues slightly worse. ] (]) 07:13, 24 November 2024 (UTC)
:::I'm not sure what the current story is there, but when I talked to the team last (i.e., in mid-2023), we were talking about the value of a tool that would do range-related work. For various reasons, this would probably be Toolforge instead of MediaWiki, and it would probably be restricted (e.g., to admins, or to a suitable group chosen by each community), but the goal was to make it require less manual work, particularly for cross-wiki abuse, and to be able to aggregate some information without requiring direct disclosure of some PII. ] (]) 23:56, 25 November 2024 (UTC)


Oh look, misleading statistics! "The Portuguese Misplaced Pages banned IPs only from the mainspace three years ago. . After the ban went into effect, they had 10K or 11K registered editors each month. It's since dropped to 8K. " ''Of course'' you have a spike in new registrations soon after you stop allowing IP edits, and you can't sustain that spike. That is not evidence of anything. It would have been more honest and illustrative to show the graph before and after the policy change, not only afterwards, e.g. . Oh look, banning IP editing has resulted in on average some 50% ''more'' registered editors than before the ban. Number of active editors is up 50% as well. The number of new pages has stayed the same. Number of edits is down, yes, but how much of this is due to less vandalism / vandalism reverts? A lot apparently, as the count of user edits has stayed about the same. Basically, from those statistics, used properly, it is impossible to detect any issues with the Portuguese Misplaced Pages due to the banning of IP editing. ] (]) 08:55, 26 November 2024 (UTC)
For contemporary events in the news, there is often a tag at the head of the article, stating that the article covers a contemporary event and that information may change with the passage of time. In the same way, does anyone think it may be worth having a similar tag at the start of articles on radio or television series about to begin a new series? In my home country of the ], on ], a new series of ] is going to begin this week (i.e. the week beginning January 25 2010) and it would be nice if there were a tag at the head of the article stating something like: "This article is on a programme about to begin a new series. Information may change as the series progresses". ] (]) 00:02, 26 January 2010 (UTC)
:I think that's a great idea. I also think that it's a great idea to add an optional link to Wikinews so that relevant stories could be linked to from the mbox as well. Both of those ideas seem to meet with resistance though, just so you're aware of it. I'd go ahead and create the template, just don't be surprised when someone sends it to TFD is all.<br/>— ] (]) 07:05, 26 January 2010 (UTC)
::{{tl|future television}} used to do this, and it was deleted: ]. —] (] • ]) 12:33, 26 January 2010 (UTC)
:Theoretically, the current event templates are for use on articles where information is changing rapidly. I can't see a situation where the progression of a television show's season would necessitate this. What I could see though, is if there is a main article for the show, and a child article for the season, using something along the lines of {{tn|Current sport-related}} to point to the child article for updated information. ]] 15:54, 26 January 2010 (UTC)
::It seems to me that any series that is still running is in that sense a current event. ] (]) 17:57, 28 January 2010 (UTC)


:"how much of this is due to less vandalism / vandalism reverts?" That's a good question. Do we have some data on this? ] (]) 09:20, 26 November 2024 (UTC)
:I'm not really sure why this would be necessary for most programmes. The only purpose I can see for it is if the TV programme's plot summary is in lengthy prose as opposed to an episode by episode table. This would mean that the prose is subject to change, whereas an episode table means the information can be edited for each episode, meaning the template is unnecessary. ] (]) 12:28, 31 January 2010 (UTC)
::{{Ping|Jo-Jo Eumerus}}, the dashboard is although it looks like they stopped reporting the data in late 2021. If you take "Number of reverts" as a proxy for vandalism, you can see that the block shifted the number of reverts from a higher equilibrium to a lower one, while overall non-reverted edits does not seem to have changed significantly during that period. ] (]) 11:44, 28 November 2024 (UTC)
:::Upon thinking, it would be useful to know how many ''good'' edits are done by IPs. Or, as a proxy, unreverted edits. ] (]) 14:03, 30 November 2024 (UTC)
:I agree that one should expect a spike in registration. (In fact, I have suggested a strictly temporary requirement to register – a few hours, even – to push some of our regular IPs into creating accounts.) But once you look past that initial spike, the trend is downward. ] (]) 05:32, 29 November 2024 (UTC)
::{{tqb|But once you look past that initial spike, the trend is downward.}}
::I still don't see any evidence that this downward trend is unusual. Apparently the WMF ] and didn't find evidence of a drop in activity. Net edits (non-revert edits standing for at least 48 hours) increased by 5.7%, although edits across other wikis increased slightly more. The impression I get is any effects are small either way—the gains from freeing up anti-vandalism resources basically offset the cost of some IP editors not signing up.
::TBH this lines up with what I'd expect. Very few people I talk to cite issues like "creating an account" as a major barrier to editing Misplaced Pages. The most common barrier I've heard from people who tried editing and gave it up is "Oh, I tried, but then some random admin reverted me, linked me to ], and told me to go fuck myself but with less expletives." ] (]) 07:32, 29 November 2024 (UTC)
::{{tqb|But once you look past that initial spike, the trend is downward.}} Not really obvious, and not more or even less so in Portuguese wikipedia than in e.g. Enwiki, FRwiki, NLwiki, ESwiki, Svwiki... So, once again, these statistics show ''no issue at all'' with disabling IP editing on Portuguese Misplaced Pages. ] (]) 10:38, 29 November 2024 (UTC)
Aside from the obvious loss of good 'IP' editors, I think there's a risk of unintended consequences from 'stopping vandalism' at all; 'vandalism' and 'disruptive editing' from IP editors (or others) isn't ''necessarily'' a bad thing, long term.
Even the worst disruptive editors 'stir the pot' of articles, bringing attention to articles that need it, and otherwise would have gone unnoticed. As someone who mostly just trawls through recent changes, I can remember dozens of times where an IP, or brand-new, user comes along and breaks an article entirely, but their edit leads inexorably to the article being improved. Sometimes there is a glimmer of a good point in their edit, that I was able to express better than they were, maybe in a more balanced or neutral way. Sometimes they make an entirely inappropriate edit, but it brings the article to the top of the list, and upon reading it I notice a number of other, previously missed, problems in the article. Sometimes, having reverted a disruptive change, I just go and add some sources or fix a few typos in the article before I go on my merry way.
You might think 'Ah, but Random article would let you find those problems too.' But 'Random article' is, well, random. IP editors are more targeted, and the fact that someone felt the need to disparage a certain person's mother in fact brings attention to an article about someone who is, unbeknownst to us editors, particularly contentious in the world of Czech Jazz Flautists, so there is a lot there to fix. By stopping people making these edits, we risk a larger proportion of articles becoming entirely stagnant. ] 15:00, 9 December 2024 (UTC)


:I feel that ] has been too clever by half here. "Ahh, but vandalism of articles stimulates improvements to those articles." If the analysis ends there, I have no objections. But if, on the other hand, you come to the conclusion that it is a good thing to vandalize articles, that it causes information to circulate, and that the encouragement of editing in general will be the result of it, you will oblige me to call out, "Halt! Your theory is confined to that which is seen; it takes no account of that which is not seen." If I were to pay a thousand people to vandalize Misplaced Pages articles full-time, bringing more attention to them, would I be a hero or villain? If vandalism is good, why do we ban vandals instead of thanking them? Because vandalism is bad—every hour spent cleaning up after a vandal is one not spent writing a new article or improving an existing one.
== WebCite for New York Times ==
:On targeting: vandals are more targeted than a "random article", but are far more destructive than basic tools for prioritizing content, and less effective than even very basic prioritization tools like sorting articles by total views. ] (]) 19:11, 9 December 2024 (UTC)
::I mean, I only said Vandalism "isn't necessarily a bad thing, long term", I don't think it's completely good, but maybe I should have added 'in small doses', fixing vandalism takes one or two clicks of the mouse in most cases and it seems, based entirely on my anecdotal experience, to sometimes have surprisingly good consequences; intuitively, you wouldn't prescribe vandalism, but these things have a way of finding a natural balance, and what's intuitive isn't necessarily what's right. One wouldn't prescribe dropping asteroids on the planet you're trying to foster life upon after you finally got it going, but we can be pretty happy that it happened! - And 'vandalism' is the very worst of what unregistered (and registered!) users get up to, there are many, many more unambiguously positive contributors than unambiguously malicious. ] 20:03, 9 December 2024 (UTC)
:::{{tqb|intuitively, you wouldn't prescribe vandalism}}
:::Right, and I think this is mainly the intuition I wanted to invoke here—"more vandalism would be good" a bit too galaxy-brained of a take for me to find it compelling without some strong empirical evidence to back it up.
:::Although TBH, I don't see this as a big deal either way. We already have to review and check IP edits for vandalism; the only difference is whether that content is displayed while we wait for review (with pending changes, the edit is hidden until it's reviewed; without it, the edit is visible until someone reviews and reverts it). This is unlikely to substantially affect contributions (the only difference on the IP's end is they have to wait a bit for their edit to show up) or vandalism (since we already ''de facto'' review IP edits). ] (]) 04:29, 14 December 2024 (UTC)


== Revise ] ==


Point 1 of Procedural removal for inactive administrators, which currently reads "Has made neither edits nor administrative actions for at least a 12-month period", should be replaced with "Has made no administrative actions for at least a 12-month period". The current wording of point 1 means that an Admin who takes no admin actions keeps the tools provided they make at least a few edits every year, which really isn't the point. The whole purpose of adminship is to protect and advance the project. If an admin isn't using the tools then they don't need to have them. ] (]) 07:47, 4 December 2024 (UTC)


===Endorsement/Opposition (Admin inactivity removal) ===
*'''Support''' as proposer. ] (]) 07:47, 4 December 2024 (UTC)
*'''Oppose''' - this would create an unnecessary barrier to admins who, for real life reasons, have limited engagement for a bit. Asking the tools back at BN can feel like a faff. Plus, logged admin activity is a poor guide to actual admin activity. In some areas, maybe half of actions aren't logged? ] (]) 19:17, 4 December 2024 (UTC)
*'''Oppose'''. First, not all admin actions are logged as such. One example which immediately comes to mind is declining an unblock request. In the logs, that's just a normal edit, but it's one only admins are permitted to make. That aside, if someone has remained at least somewhat engaged with the project, they're showing they're still interested in returning to more activity one day, even if real-life commitments prevent them from it right now. We all have things come up that take away our available time for Misplaced Pages from time to time, and that's just part of life. Say, for example, someone is currently engaged in a PhD program, which is a tremendously time-consuming activity, but they still make an edit here or there when they can snatch a spare moment. Do we really want to discourage that person from coming back around once they've completed it? ] <small><sup>]</sup></small> 21:21, 4 December 2024 (UTC)
*:We could declare specific types of edits which count as admin actions despite being mere edits. It should be fairly simple to write a bot which checks if an admin has added or removed specific texts in any edit, or made any of specific modifications to pages. Checking for protected edits can be a little harder (we need to check for protection at the time of edit, not for the time of the check), but even this can be managed. Edits to pages which match specific regular expression patterns should be trivial to detect. ] ] 11:33, 9 December 2024 (UTC)
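A rough sketch of the kind of check such a bot could start from, using the public MediaWiki Action API's <code>logevents</code> list (the function names, the 365-day cutoff, and treating any logged event as an "admin action" are illustrative assumptions; detecting admin-like plain edits such as declined unblocks would need extra heuristics not shown here):

```python
# Sketch only: fetch a user's most recent logged action via the MediaWiki
# Action API, then apply an inactivity cutoff to it.
import datetime
import json
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"

def is_inactive(last_action: datetime.datetime,
                now: datetime.datetime,
                days: int = 365) -> bool:
    """True if the most recent logged action is older than the cutoff."""
    return (now - last_action) > datetime.timedelta(days=days)

def last_logged_action(username: str):
    """Timestamp of the user's most recent log entry, or None if none exist."""
    params = urllib.parse.urlencode({
        "action": "query",
        "list": "logevents",
        "leuser": username,
        "lelimit": "1",
        "format": "json",
    })
    with urllib.request.urlopen(f"{API}?{params}") as resp:
        events = json.load(resp)["query"]["logevents"]
    if not events:
        return None
    # API timestamps look like "2024-12-04T07:47:00Z"
    return datetime.datetime.fromisoformat(
        events[0]["timestamp"].replace("Z", "+00:00"))
```

Checking for edits through protection at the time of the edit, as noted above, would be harder than this timestamp comparison.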
*'''Oppose''' There's no indication that this is a problem needs fixing. ]] <small><sup>Shoot Blues, Tell VileRat!</sup></small> 00:55, 5 December 2024 (UTC)
* '''Support''' Admins who don't use the tools should not have the tools. ] ] 03:55, 5 December 2024 (UTC)
*'''Oppose''' While I have never accepted "not all admin actions are logged" as a realistic reason for no logged actions in an entire year, I just don't see what problematic group of admins this is in response to. Previous tweaks to the rules were in response to admins that seemed to be gaming the system, that were basically inactive and when they did use the tools they did it badly, etc. We don't need a rule that isn't pointed at a provable, ongoing problem. ] ] 19:19, 8 December 2024 (UTC)
*'''Oppose''' If an admin is still editing, it's not unreasonable to assume that they are still up to date with policies, community norms etc. I see no particular risk in allowing them to keep their tools. ] (]) 19:46, 8 December 2024 (UTC)
*'''Oppose''': It feels like some people are trying to accelerate admin attrition and I don't know why. This is a solution in search of a problem. ] (]) 07:11, 10 December 2024 (UTC)
*'''Oppose''' Sure there is a problem, but the real problem I think is that it is puzzling why they are still admins. Perhaps we could get them all to make a periodic 'declaration of intent' or some such every five years that explains why they want to remain an admin. ] (]) 19:01, 11 December 2024 (UTC)
*'''Oppose''' largely per scribolt. We want to take away mops from inactive accounts where there is a risk of them being compromised or having got out of touch with community norms; this proposal rather targets the admins who are active members of the community. Also declining incorrect deletion tags and AIV reports doesn't require the use of the tools and doesn't get logged, but is also an important thing for admins to do. '']]<span style="color:#CC5500">Chequers</span>'' 07:43, 15 December 2024 (UTC)
*'''Oppose'''. What is the motivation for this frenzy to make more hoops for admins to jump through and use not jumping through hoops as an excuse to de-admin them? What problem does it solve? It seems counterproductive and de-inspiring when the bigger issue is that we don't have enough new admins. —] (]) 07:51, 17 December 2024 (UTC)
*'''Oppose''' Some admin actions aren't logged, and I also don't see why this is necessary. Worst case scenario, we have ]. ] (]) 15:25, 17 December 2024 (UTC)
*'''Oppose''' I quite agree with David Eppstein's sentiment. What's with the rush to add more hoops? Is there some problem with the admin corps that we're not adequately dealing with? Our issue is that we have too few admins, not that we have too many. ] <sup>]</sup>] 23:20, 22 December 2024 (UTC)


===Discussion (Admin inactivity removal)===
* Making administrative actions can be helpful to show that the admin is still up-to-date with community norms. We could argue that if someone is active but doesn't use the tools, it isn't a big issue whether they have them or not. Still, the tools can be requested back following an inactivity desysop, if the formerly inactive admin changes their mind and wants to make admin actions again. For now, I don't see any immediate issues with this proposal. ] (] · ]) 08:13, 4 December 2024 (UTC)
* Looking back at previous RFCs, in ] the reasoning was to reduce the attack surface for inactive account takeover, and in ] it was about admins who haven't been around enough to keep up with changing community norms. What's the justification for this besides "use it or lose it"? Further, we already have a mechanism (from the 2022 RFC) to account for admins who make a few edits every year. ]] 12:44, 4 December 2024 (UTC)
* I also note that not all admin actions are logged. Logging editing through full protection requires ]. Reviewing of deleted content isn't logged at all. Who will decide whether an admin's XFD "keep" closures are really ]s or not? Do adminbot actions count for the operator? There are probably more examples. Currently we ignore these edge cases since the edits will probably also be there, but now if we can desysop someone who made 100,000 edits in the year we may need to consider them. ]] 12:44, 4 December 2024 (UTC)
*:I had completely forgotten that many admin actions weren't logged (and thus didn't "count" for activity levels), that's actually a good point (and stops the "community norms" arguments as healthy levels of community interaction can definitely be good evidence of that). And, since admins desysopped for inactivity can request the tools back, an admin needing the bit but not making any logged actions can just ask for it back. At this point, I'm not sure if there's a reason to go through the automated process of desysopping/asking for resysop at all, rather than just politely ask the admin if they still need the tools.{{pb}}I'm still very neutral on this by virtue of it being a pretty pointless and harmless process either way (as, again, there's nothing preventing an active admin desysopped for "inactivity" from requesting the tools back), but I might lean oppose just so we don't add a pointless process for the sake of it. ] (] · ]) 15:59, 4 December 2024 (UTC)
* To me this comes down to whether the community considers it problematic for an admin to have tools they aren't using. Since it's been noted that not all admin actions are logged, and an admin who isn't using their tools also isn't causing any problems, I'm not sure I see a need to actively remove the tools from an inactive admin; in a worst-case scenario, isn't this encouraging an admin to (potentially mis-)use the tools solely in the interest of keeping their bit? There also seems to be somewhat of a bad-faith assumption to the argument that an admin who isn't using their tools may also be falling behind on community norms. I'd certainly like to hope that if I was an admin who had been inactive that I would review P&G relevant to any admin action I intended to undertake before I executed. ] (]) 15:14, 4 December 2024 (UTC)
* As I have understood it, the original rationale for desysopping after no activity for a year was the perception that an inactive account was at higher danger of being hijacked. It had nothing to do with how often the tools were being used, and presumably, if the admin was still editing, even if not using the tools, the account was less likely to be hijacked. - ] 22:26, 4 December 2024 (UTC)
*:And also, if the account of an active admin ''was'' hijacked, both the account owner and those they interact with regularly would be more likely to notice the hijacking. The sooner a hijacked account is identified as hijacked, the sooner it is blocked/locked which obviously minimises the damage that can be done. ] (]) 00:42, 5 December 2024 (UTC)
*I was not aware that not all admin actions are logged, obviously they should all be correctly logged as admin actions. If you're an Admin you should be doing Admin stuff, if not then you obviously don't need the tools. If an Admin is busy IRL then they can either give up the tools voluntarily or get desysopped for inactivity. The "Asking the tools back at BN can feel like a faff." isn't a valid argument, if an Admin has been desysopped for inactivity then getting the tools back '''should''' be "a faff". Regarding the comment that "There's no indication that this is a problem needs fixing." the problem is Admins who don't undertake admin activity, don't stay up to date with policies and norms, but don't voluntarily give up the tools. The ] change was about total edits over 5 years, not specifically admin actions and so didn't adequately address the issue. ] (]) 03:23, 5 December 2024 (UTC)
*:{{tpq|obviously they should all be correctly logged as admin actions}} - how ''would'' you log actions that are administrative actions due to context/requiring passive use of tools (viewing deleted content, etc.) rather than active use (deleting/undeleting, blocking, and so on)/declining requests where accepting them would require tool use? (e.g. closing various discussions that really shouldn't be NAC'd, reviewing deleted content, declining page restoration) Maybe there are good ways of doing that, but I haven't seen any proposed the various times this subject came up. Unless and until "soft" admin actions are actually logged somehow, "editor has admin tools and continues to engage with the project by editing" is the closest, if very imperfect, approximation to it we have, with criterion 2 sort-of functioning to catch cases of "but these specific folks edit so little over a prolonged time that it's unlikely they're up-to-date and actively engaging in soft admin actions". (I definitely do feel '''criterion 2''' could be significantly stricter, fwiw) ]] 05:30, 5 December 2024 (UTC)
*::Not being an Admin I have no idea how their actions are or aren't logged, but is it a big ask that Admins perform at least a few logged Admin actions in a year? The "imperfect, approximation" that "editor has admin tools and continues to engage with the project by editing" is completely inadequate to capture Admin inactivity. ] (]) 07:06, 6 December 2024 (UTC)
*:::Why is it "completely inadequate"? ] (]) 10:32, 6 December 2024 (UTC)
*::::I've been a "hawk" regarding admin activity standards for a very long time, but this proposal comes off as half-baked. The rules we have now are the result of careful consideration and incremental changes aimed at specific, ''provable'' issues with previous standards. While I am not a proponent of "not all actions are logged" as a blanket excuse for no logged actions in several years, it is feasible that an admin could be otherwise fully engaged with the community while not having any logged actions. We haven't been having trouble with admins who would be removed by this, so where's the problem? ] ] 19:15, 8 December 2024 (UTC)


== "Blur all images" switch ==


Although I know that ], I propose that the Vector 2022 and Minerva Neue skins (+the Misplaced Pages mobile apps) have a "blur all images" toggle that blurs all the images on all pages (requiring clicking on them to view them), which simplifies the process of doing ] as that means:
#You don't need to create an account to hide all images.
#You don't need any complex JavaScript or CSS installation procedures. Not even browser extensions.
#You can blur all images in the mobile apps, too.
#It's all done with one push of a button. No extra steps needed.
#Blurring all images > hiding all images. The content of a blurred image could be easily memorized, while a completely hidden image is difficult to compare to the others.
And it shouldn't be limited to just Misplaced Pages. This toggle should be available on all other WMF projects and MediaWiki-powered wikis, too.
] (]) 15:26, 5 December 2024 (UTC)
:Sounds good. ] will be thrilled. ] (]) 15:29, 5 December 2024 (UTC)
:Sounds like something I can try to make a demo of as a userscript! ] (] · ]) 15:38, 5 December 2024 (UTC)
::] should do the job, although I'm not sure how to deal with the Page Previews extension's images. ] (] · ]) 16:16, 5 December 2024 (UTC)
:::Wow, @], is that usable on all skins/browsers/devices? If so, we should be referring people to it from everywhere instead of the not-very-helpful ], which I didn't even bother to try to figure out. ] (]) 15:00, 17 December 2024 (UTC)
::::I haven't tested it beyond my own setup, although I can't see reasons why it wouldn't work elsewhere. However, there are two small bugs I'm not sure how to fix: when loading a new page, the images briefly show up for a fraction of a second before being blurred; and the images in Page Previews aren't blurred (the latter, mostly because I couldn't get the html code for the popups). ] (] · ]) 16:57, 17 December 2024 (UTC)
:::::Ah, yes, I see both of those. Probably best to get at least the briefly-showing bug fixed before recommending it generally. The page previews would be good to fix but may be less of an issue for recommending generally, since people using that can be assumed to know how to turn it off. ] (]) 18:28, 17 December 2024 (UTC)
::::::I don't think there's a way to get around when the Javascript file is loaded and executed. I think users will have to modify their personal CSS file to blur images on initial load, much like the solution described at {{section link|Help:Options to hide an image|Hide all images until they are clicked on}}. ] (]) 18:41, 17 December 2024 (UTC)
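A minimal personal-CSS sketch of that approach (assumptions: the <code>#mw-content-text</code> selector is MediaWiki's standard content container; revealing on hover keeps the sketch pure CSS, whereas a persistent click-to-reveal toggle would need a small script on top):

```css
/* Blur content images from the first paint; since no script runs, there is
   no flash of unblurred images while the page loads. */
#mw-content-text img {
    filter: blur(10px);
}
/* Hovering reveals an image; replace with a class toggled by JS for
   click-to-reveal behaviour. */
#mw-content-text img:hover {
    filter: none;
}
```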
::::@] -- the issue with a script would be as follows:
::::# Even for logged-in users, user scripts are a moderate barrier to install (digging through settings, or worse still, having to copy-paste to the JS/CSS user pages).
::::# The majority of readers do not have an account, and the overwhelming majority of all readers make zero edits. For many people, it's too much of a hassle to sign up (or they can't remember their password, or a number of other reasons etc, etc)
::::What all readers and users have, though, is this menu: ]
::::I say instead of telling the occasional IP or user who complains to install a script (there are probably many more people who object to NOTCENSORED, but don't want to or don't know how to voice objections), we could add the option to replace all images with a placeholder (or blur) and perhaps also an option to increase thumbnail size.
::::On the image blacklist aspect, doesn't ] have a script that hides potentially offensive images? I've not a clue how it works, but perhaps it could be added to the appearance menu (I don't support this myself, for a number of reasons)
::::''']]''' 18:38, 17 December 2024 (UTC)
::::: That's ], which is already listed on ]. I wrote it a long time ago as a joke for one of these kinds of discussions: it does very well at hiding all "potentially offensive" images because it hides all images. But people who want to have to click to see any images found it useful enough to list it on ]. ]] 22:52, 17 December 2024 (UTC)
::::::Out of curiosity, how does it filter for potentially offensive images? The code at user:Anomie/hide-images.js seems rather minimal (as I write this, I realize it may work by hiding ''all'' images, so I may have answered my own question). ''']]''' 22:58, 17 December 2024 (UTC)
:::::::{{tq|because it hides all images}} ] (]) 23:11, 17 December 2024 (UTC)
:Will be a problem for non-registered users, as the default would clearly be to leave images unblurred for them.<span id="Masem:1733413219582:WikipediaFTTCLNVillage_pump_(proposals)" class="FTTCmt"> —&nbsp;] (]) 15:40, 5 December 2024 (UTC)</span>
::Better show all images by default for all users. If you clear your cookies often you can simply change the toggle every time. ] (]) 00:07, 6 December 2024 (UTC)
:::That's my point: if you are unregistered, you will see whatever the default setting is (which I assume will be unblurred, which might lead to more complaints). We had similar problems dealing with image thumbnail sizes, a setting that unregistered users can't adjust. ] (]) 01:10, 6 December 2024 (UTC)
::::I'm confused about how this would lead to more complaints. Right now, logged-out users see every image without obfuscation. After this toggle rolls out, logged-out users would still see every image without obfuscation. What fresh circumstance is leading to new complaints? <span style="position: relative; top: -0.5em;">꧁</span>]<span style="position: relative; top: -0.5em;">꧂</span> 07:20, 12 December 2024 (UTC)
:::::Well, we'd be putting in an option to censor, but not actively doing it. People will have issues with that. '''] <sup>(] • ])</sup>''' 10:37, 12 December 2024 (UTC)
::::::Isn't the page ] "an option to censor" we've put in? ] (]) 11:09, 12 December 2024 (UTC)
:I'm not opposed to this, if it can be made to work, fine. ] (]) 19:11, 5 December 2024 (UTC)
:What would be the goal of a blur all images option? It seems too tailored. But a "hide all images" could be suitable. ] (]) 06:40, 11 December 2024 (UTC)
::Simply removing them might break page layout, so images could be replaced with an equally sized placeholder. ''']]''' 13:46, 13 December 2024 (UTC)


Could there be an option to simply not load images for people with a low-bandwidth connection or who don't want them? ] (]) 16:36, 5 December 2024 (UTC)


:'''I agree'''. This way, the options would go as
:*Show all images
:*Blur all images
:*Hide all images
:It would honestly be better with your suggestion. ] (]) 00:02, 6 December 2024 (UTC)
::Of course, it will do nothing to appease the "These pics shouldn't be on WP at all" people. ] (]) 06:52, 6 December 2024 (UTC)
:::“Commons be thataway” is what we should tell them ] (]) 18:00, 11 December 2024 (UTC)
::I suggest that the "hide all images" display file name if possible. Between file name and caption (which admittedly are often similar, but not always), there should be sufficient clue whether an image will be useful (and some suggestion, but not reliably so, if it may offend a sensibility.) -- ] (]) 17:59, 11 December 2024 (UTC)
:For low-bandwidth ''or expensive bandwidth'' -- many folks are on mobile plans which charge for bandwidth. -- ] (]) 14:28, 11 December 2024 (UTC)


Regarding not limiting image management choices to Misplaced Pages: that's why it's better to manage this on the client side. Anyone needing to limit their bandwidth usage, or to otherwise decide individually on whether or not to load each photo, will likely want to do this generally in their web browsing. ] (]) 18:43, 6 December 2024 (UTC)
:Definitely a browser issue. You can get plug-ins for Chrome right now that will do exactly this, and there's no need for Misplaced Pages/MediaWiki to implement anything. &mdash; ] (]) 18:48, 6 December 2024 (UTC)
I propose something a bit different: all images on the bad images list can only be viewed with a user account that has been verified to be over 18 with government issued ID. I say this because in my view there is absolutely no reason for a minor to view it. ] (]) 23:41, 8 December 2024 (UTC)
:Well, that means readers will be forced to not only create an account, but also disclose sensitive personal information, just to see encyclopedic images. That is pretty much the opposite of a 💕. ] (] · ]) 23:44, 8 December 2024 (UTC)
::I can support allowing users to opt to blur or hide some types of images, but this needs to be an opt-in only. By default, show all images. And I'm also opposed to any technical restriction which requires self-identification to overcome, except for cases where the Foundation deems it necessary to protect private information (checkuser, oversight-level hiding, or emails involving private information). Please also keep in mind that even if a user sends a copy of an ID which indicates the individual person's age, there is no way to verify that it was the user's own ID which had been sent. ] ] 11:25, 9 December 2024 (UTC)
:::Also, the ] is a really terrible standard. ] is completely harmless content that ''happened'' to be abused. And even some of the “NSFW” images are perfectly fine for children to view, for example ]. Are we becoming Texas or Florida now? ] (]) 18:00, 11 December 2024 (UTC)
::::You could've chosen a much better example like dirty toilet or the flag of Hezbollah... ] (]) 19:38, 11 December 2024 (UTC)
:::::Well, yes, but I rank that as “harmless”. I don’t know why anyone would consider a woman with her newborn baby so inappropriate for children it needs to be censored like hardcore porn. ] (]) 14:53, 12 December 2024 (UTC)
:::::The Hezbollah flag might be blacklisted because it's copyrighted, but placed in articles by uninformed editors (though one of JJMC89's bots automatically removes NFC files from pages). We have ] for those uses. ''']]''' 16:49, 13 December 2024 (UTC)
:I '''support''' this proposal. It’s a very clean compromise between the “think of the children” camp and the “freeze peach camp”. ] (]) 17:51, 11 December 2024 (UTC)
:Let me dox myself so I can view this image. Even Google image search doesn't require something this stringent. '''] <sup>(] • ])</sup>''' 19:49, 11 December 2024 (UTC)
:'''oppose''' should not be providing toggles to censor. ] (]) 15:15, 12 December 2024 (UTC)
:What about an option to disable images entirely? It might use significantly less data. ''']]''' 02:38, 13 December 2024 (UTC)
::This is an even better idea as an opt-in toggle than the blur one. Load no images by default, and let users click a button to load individual images. That has a use beyond sensitivity. <span style="position: relative; top: -0.5em;">꧁</span>]<span style="position: relative; top: -0.5em;">꧂</span> 02:46, 13 December 2024 (UTC)
:::Yes I like that idea even better. I think in any case we should use alt text to describe the image so people don’t have to play Russian roulette based on potentially vague or nonexistent descriptions, i.e. without alt text an ignorant reader would have no idea the album cover for ] depicts a nude child in a… ''questionable'' pose. ] (]) 11:42, 13 December 2024 (UTC)
::An option to replace images with alt text seems both much more useful and much more neutral as an option. There are technical reasons why a user might want to not load images (or only selectively load them based on the description), so that feels more like a neutral interface setting. An option to blur images by default sends a stronger message that images are dangerous.--] (]) 16:24, 13 December 2024 (UTC)
:::Also it'd negate the bandwidth savings somewhat (assuming an image is displayed as a low pixel-count version). I'm of the belief that Misplaced Pages should have more features tailored to the reader. ''']]''' 16:58, 13 December 2024 (UTC)
::::At the very least, add a filter that allows you to block all images on the bad image list, specifically that list and those images. To the people who say you shouldn't have to give up personal info, I say that we should go the way Roblox does. Seems a bit random, hear me out: To play 17+ games, you need to verify with gov id, those games have blood alcohol, unplayable gambling and "romance". I say that we do the same. Giving up personal info to view bad things doesn't seem so bad to me. ] (]) 03:44, 15 December 2024 (UTC)
:::::Building up a database of people who have applied to view bad things on a service that's available in restrictive regimes sounds like a way of putting our users in danger. -- ] (]) 07:13, 15 December 2024 (UTC)
:::::Roblox =/= Misplaced Pages. I don’t know why I have to say this, nor did I ever think I would. And did you read what I already said about the “bad list”? Do you want people to have to submit their ID to look at poop, a woman with her baby, the Hezbollah flag, or ]? How about we age-lock articles about adult topics next? ] (]) 15:55, 15 December 2024 (UTC)
:::::Ridiculous. '''] <sup>(] • ])</sup>''' 16:21, 15 December 2024 (UTC)
:::::So removing a significant thing that makes Misplaced Pages free is worth preventing underaged users from viewing certain images? I wouldn't say that would be a good idea if we want to make Misplaced Pages stay successful. If a reader wants to read an article, they should expect to see images relevant to the topic. This includes topics that are usually considered NSFW like ], ], et cetera. If a person willingly reads an article about an NSFW topic, they should acknowledge that they would see topic-related NSFW images. <span style="font-family:Times New Roman;font-size:100%;color:#00008B;background-color:transparent;;CSS">]]</span> 16:45, 15 December 2024 (UTC)
:::::What "bad things"? You haven't listed any. --] (]) (]) 15:57, 17 December 2024 (UTC)
:::::This is moot. Requiring personal information to use Misplaced Pages isn't something this community even has the authority to do. ] (]) 16:23, 17 December 2024 (UTC)
::Yes, if this happens it should be through a disable all images toggle, not an additional blur. There have been times that would have been very helpful for me. ] (]) 03:52, 15 December 2024 (UTC)
:'''Support''' the proposal as written. I'd imagine WMF can add a button below the already-existing accessibility options. People have different cultural, safety, age, and mental needs to block certain images. ] <i><sup style="display:inline-flex;rotate:7deg;">]</sup></i> 13:04, 15 December 2024 (UTC)
:I'd support an option to replace images with the alt text, as long as all you had to do to see a hidden image was a single click/tap (we'd need some fallback for when an image has no alt text, but that's a minor issue). Blurring images doesn't provide any significant bandwidth benefits and could in some circumstances cause problems (some blurred innocent images look very similar to some blurred images that some people regard as problematic, e.g. human flesh and cooked chicken). I strongly oppose anything that requires submitting personal information of any sort in order to see images per NatGertler. ] (]) 14:15, 15 December 2024 (UTC)
::Fallback for alt text could be filename, which is generally at least slightly descriptive. -- ] (]) 14:45, 15 December 2024 (UTC)
* These ideas (particularly the toggle button to blur/hide all images) can be suggested at ''']'''. ] (]) 15:38, 15 December 2024 (UTC)


== Class icons in categories ==


This is something that has frequently occurred to me as a potentially useful feature when browsing categories, but I have never quite gotten around to actually proposing it until now.


Basically, I'm thinking it could be very helpful to have ] class icons appear next to article entries in categories. This should be helpful not only to readers, to guide them to the more complete entries, but also to editors, to alert them to articles in the category that are in need of work. Thoughts? ] (]) 03:02, 7 December 2024 (UTC)
:If we go with this, I think there should be only 4 levels - Stub, Average (i.e. Start, C, or B), GA, & FA.
:There are significant differences between Start, C, and B, but there's no consistent effort to grade these articles correctly and consistently, so it might be better to lump them into one group. Especially if an article goes down in quality, almost nobody will bother to demote it from B to C. <span style="font-family:cursive">]]</span> 04:42, 8 December 2024 (UTC)
:: Isn't that more of an argument for consolidation of the existing levels rather than reducing their number for one particular application?
:: Other than that, I think I would have to agree that there are too many levels - the difference between Start and C class, for example, seems quite arbitrary, and I'm not sure of the usefulness of A class - but the lack of consistency within levels is certainly not confined to these lower levels, as GAs can vary enormously in quality and even FAs. But the project nonetheless finds the content assessment model to be useful, and I still think their usefulness would be enhanced by addition to categories (with, perhaps, an ability to opt in or out of the feature).
:: I might also add that adding content assessment class icons to categories would be a good way to draw more attention to them and encourage users to update them when appropriate. ] (]) 14:56, 8 December 2024 (UTC)
:::I believe anything visible in reader-facing namespaces needs to be more definitively accurate than in editor-facing namespaces. So I'm fine having all these levels on talk pages, but not on category pages, unless they're applied more rigorously.
:::On the other hand, with FAs and GAs, although standards vary within a range, they do undergo a comprehensive, well-documented, and consistent process for promotion and demotion. So just like we have an icon at the top of those articles (and in the past, next to interwiki links), I could hear putting them in categories. <span style="font-family:cursive">]]</span> 18:25, 8 December 2024 (UTC)
:Isn't the display of links on Category pages entirely dependent on the MediaWiki software? We don't even have ]s displayed, which would probably be considerably more useful.{{pb}}Any function that has to retrieve content from member articles (much less their talkpages) is likely to be somewhat computationally expensive. Someone with more technical knowledge may have better information. ] (]) 18:01, 8 December 2024 (UTC)
::Yes, this will definitely require MediaWiki development, but probably not so complex. And I wonder why this will be more computationally expensive than scanning articles for ] tags in the first place. <span style="font-family:cursive">]]</span> 18:27, 8 December 2024 (UTC)
:::{{tpq| And I wonder why this will be more computationally expensive than scanning articles for ] tags in the first place}} my understanding is that this is not what happens. When a category is added to or removed from an article, the software adds or removes that page as a record from a database, and that database is what is read when viewing the category page. ] (]) 20:14, 8 December 2024 (UTC)
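The storage model described above can be illustrated with a toy version of MediaWiki's categorylinks table. This is a simplified sketch, not the real schema (which has more columns, e.g. cl_sortkey and cl_type); the point is that categorisation is written as database rows at save time, so rendering a category page is a cheap indexed read rather than a scan of member articles:

```python
import sqlite3

# Toy model of MediaWiki's categorylinks table (simplified; the real
# table has more columns). Category membership is stored as rows written
# when a page is saved, so listing a category is an indexed lookup.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE categorylinks (cl_from TEXT, cl_to TEXT)")

def save_page(title, categories):
    """On save, replace the page's category rows (roughly what happens on edit)."""
    db.execute("DELETE FROM categorylinks WHERE cl_from = ?", (title,))
    db.executemany(
        "INSERT INTO categorylinks VALUES (?, ?)",
        [(title, cat) for cat in categories],
    )

def category_members(cat):
    """Viewing a category page just reads the stored rows."""
    rows = db.execute(
        "SELECT cl_from FROM categorylinks WHERE cl_to = ? ORDER BY cl_from",
        (cat,),
    )
    return [r[0] for r in rows]

save_page("Helsinki", ["Capitals in Europe", "Coastal cities"])
save_page("Oslo", ["Capitals in Europe"])
save_page("Oslo", ["Capitals in Europe", "Cities in Norway"])  # re-save updates rows

print(category_members("Capitals in Europe"))  # ['Helsinki', 'Oslo']
```

Anything that instead pulls assessment data out of member talk pages at view time would be a different, and costlier, kind of operation.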
:I think that in the short term, this could likely be implemented using a user script (displaying short descriptions would also be nice). Longer-term, if done via an extension, I suggest limiting the icons to GAs and FAs for readers without accounts, as other labels aren't currently accessible to them. (Whether this should change is a separate but useful discussion).<span id="Frostly:1733699202975:WikipediaFTTCLNVillage_pump_(proposals)" class="FTTCmt"> —&nbsp;] (]) 23:06, 8 December 2024 (UTC)</span>
:: I'd settle for a user script. Who wants to write it? :) ] (]) 23:57, 8 December 2024 (UTC)
::: As an FYI for whoever decides to write it, ] may be useful to you. ]] 01:04, 9 December 2024 (UTC)
::::@], the ] already exists. Go to ] and scroll about two-thirds of the way through that section.
::::I strongly believe that ordinary readers don't care about this kind of ], but if you want it for yourself, then use the gadget or fork its script. Changing this old gadget from "adding text and color" to "displaying an icon" should be relatively simple. ] (]) 23:43, 12 December 2024 (UTC)
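For anyone exploring the user-script route, the core lookup logic is separable from the browser side. Below is a minimal sketch in Python of that logic: given a response in the shape returned by the PageAssessments API (the exact field names here are an assumption based on that extension, not verified against a live query), pick the best class per page. A user script would do the same mapping in JavaScript before decorating the category entries:

```python
# Sketch of the lookup logic a class-icon user script would need.
# SAMPLE_RESPONSE mimics the assumed shape of a MediaWiki
# action=query&prop=pageassessments response; the field names are an
# assumption for illustration, not a verified API contract.
SAMPLE_RESPONSE = {
    "query": {
        "pages": {
            "123": {
                "title": "Example article",
                "pageassessments": {
                    "Military history": {"class": "B", "importance": "High"},
                    "Russia": {"class": "C", "importance": "Mid"},
                },
            },
            "456": {
                "title": "Another article",
                "pageassessments": {
                    "Cuba": {"class": "FA", "importance": "Top"},
                },
            },
        }
    }
}

# Classes ordered best to worst, so we can pick the highest rating
# when different projects disagree.
CLASS_RANK = ["FA", "FL", "A", "GA", "B", "C", "Start", "Stub", "List"]

def best_class(assessments):
    """Return the highest-ranked class among a page's project ratings."""
    classes = {a.get("class") for a in assessments.values() if a.get("class")}
    ranked = [c for c in CLASS_RANK if c in classes]
    return ranked[0] if ranked else None

def classes_for_pages(response):
    """Map page title -> best assessment class from an API response."""
    result = {}
    for page in response["query"]["pages"].values():
        cls = best_class(page.get("pageassessments", {}))
        if cls:
            result[page["title"]] = cls
    return result

print(classes_for_pages(SAMPLE_RESPONSE))
# {'Example article': 'B', 'Another article': 'FA'}
```

The script would then only need to attach the matching icon next to each category entry; pages with no assessment are simply left undecorated.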
*I strongly oppose loading any default javascript solution that would cause hundreds of client-side queries every time a category page is opened. As far as making an upstream software request, there are multiple competing page quality metrics and schemes that would need to be reviewed. — ] <sup>]</sup> 15:13, 18 December 2024 (UTC)


== Cleaning up NA-class categories ==
::You know what'd be easy for beginning editors? Reviewing pages created by non-native English speakers. I found a few on some obscure battles between Russia and Japan, or about Cuba, that had many small errors throughout. Pages like that can greatly benefit from the ear of a native English speaker. ] (] | ]) 03:26, 29 January 2010 (UTC)
:::Now this sounds cool! It's low hanging fruit, and it doesn't require a lot of jargon to do. The drawback is what happens if the new editor is also not a native English speaker? --] (]) 04:00, 29 January 2010 (UTC)
:::: *shrug* Have other options? ] (] | ]) 04:02, 29 January 2010 (UTC)
:::::Yeah, give the person several choices. Hence, a list. --] (]) 08:03, 29 January 2010 (UTC)
::::::I think this is a good idea, but it does require tailoring to the individual. I believe that many editors come here because of a particular interest and putting them in touch with the most relevant project is a good start; Perhaps the list could have a tailored search box to make it easy to find projects that they are interested in? Other easy entry areas are ], and one I'd like to see started - a project to add pictures to articles where the English language article lacks pictures but there is an article in another language version that has a picture. For some new editors who come with a more academic but less technical background perhaps reviewing articles at ] would be a good start. For others installing ] and starting at ] might be possible - though I suspect that for most this would require quite a familiarity with our categorisation logic first. '']]<span style="color:DarkOrange">Chequers''</span> 13:58, 29 January 2010 (UTC)


We have a long-standing system of double classification of pages, by quality (stub, start, C, ...) and importance (top, high, ...). And then there are thousands of pages that don't need either of these: portals, redirects, categories, ... As a result most of these pages have a double or even triple categorization, e.g. ] is in ], ], and ].
:::::::The editor above me is onto something with the search function. I know for a fact that I hardly ever edit any articles other than those related to comedy and comedians as that is what I have an interest in. If I happen to be browsing and find something in another category that needs editing, I will of course do it, but I don't really actively seek editing outside of my field. For this reason, I think it would be ideal to add a function into which a new contributor could add one of their interests and be directed to the appropriate category page or project. ] (]) 12:25, 31 January 2010 (UTC)


My suggestion would be to put those pages only in the "Class" category (in this case ]), and only give that category an NA-rating. Doing this for all these subcats (File, Template, ...) would bring the 276,534 (!) pages currently in ] back to near-zero, only leaving the anomalies which probably need a different importance rating (and thus making it a useful cleanup category).
== Time to remove placeholders? ==


It is unclear why we have two systems (3 cat vs. 2 cat), the tags on ] (without class or NA indication) have a different effect than the tags on e.g. ], but my proposal is to make the behaviour the same, and in both cases to reduce it to the class category only (and make the classes themselves categorize as "NA importance"). This would only require an update in the templates/modules behind this, not on the pages directly, I think. ] (]) 15:15, 9 December 2024 (UTC)
Nearly 2 years ago it ] not to use placeholders (] and ]). Consensus was split over how to proceed, as some wanted to wait until a replacement could be devised; this has not happened. These should be removed from the wiki post haste ] (]) 09:13, 27 January 2010 (UTC)


:Are there any pages that don't have the default? e.g. are there any portals or Category talk: pages rated something other than N/A importance? If not then I can't see any downsides to the proposal as written. If there are exceptions, then as long as the revised behaviour allows for the default to be overwritten when desired again it would seem beneficial. ] (]) 16:36, 9 December 2024 (UTC)
From:]
::As far as I know, there are no exceptions. And I believe that one can always override the default behaviour with a local parameter. {{ping|Tom.Reding}} I guess you know these things better and/or know who to contact for this. ] (]) 16:41, 9 December 2024 (UTC)
{{Nutshell|title=Use of placeholders|Use of these placeholders is neither encouraged nor deprecated. Although many editors object to their appearance, they work for the intended purpose.}}<br/>— ] (]) 17:45, 27 January 2010 (UTC)
::Looking a bit further, there do seem to be exceptions, but I wonder why we would e.g. have redirects which are of high importance to a project (]). Certainly when one considers that in some cases, the targets have a lower importance than the redirects? E.g. ]. ] (]) 16:46, 9 December 2024 (UTC)
The text on the images themselves says otherwise.
:::I was imagining high importance United States redirects to be things like ] but that isn't there and what is is a very motley collection. I only took a look at one, ]. As far as I can make out the article was originally at this title but later moved to ] over a redirect. Both titles had independent talk pages that were neither swapped nor combined, each being rated high importance when they were the talk page of the article. It seems like a worthwhile exercise for the project to determine whether any of those redirects are actually (still?) high priority but that's independent of this proposal. ] (]) 17:17, 9 December 2024 (UTC)
<p>There was significant opposition to the use of images such as ] and this one. ] with the question, "placeholder images should not be used at all on the main page of articles", however, only ] with any particular recommendation.</p>] (]) 23:07, 27 January 2010 (UTC)
:::{{clc|Custom importance masks of WikiProject banners}} is where to look for projects that might use an importance other than NA for cats, or other deviations. &nbsp;&nbsp;<b>~</b>&nbsp;<span style="font-family:Monotype Corsiva; font-size:16px;">] (] ⋅])</span>&nbsp; 17:54, 9 December 2024 (UTC)
:Most projects don't use this double intersection (as can be seen by the amount of categories in ], compared to ]). I personally feel that the bot updated page like ] is enough here and requires less category maintenance (creating, moving, updating, etc.) for a system that is underused. ] (]) 17:41, 9 December 2024 (UTC)
:Support this, even if there might be a few exceptions, it will make them easier to spot and deal with rather than having large unsorted NA-importance categories. ] (] · ]) 18:04, 9 December 2024 (UTC)
:Strongly agree with this. It's bizarre having two different systems, as well as a pain in the ass sometimes. Ideally we should adopt a single consistent categorization system for importance/quality. ] (]) 22:56, 16 December 2024 (UTC)


Okay, does anyone know what should be changed to implement this? I presume this comes from ], I'll inform the people there about this discussion. ] (]) 14:49, 13 December 2024 (UTC)
Well, if you want to fight that fight, go ahead. If you're right then you shouldn't have any real issues with ]... I don't really care one way or another, personally. Although, thinking about it, the placeholders are not only nice to have, but their "ugliness" actually serves a purpose in that it ought to prompt some people to upload (appropriately licensed!) images to take their place.<br/>— ] (]) 23:11, 27 January 2010 (UTC)
:I have a major issue with an FFD. The image is on Commons, and as far as I know the only legitimate reason for deletion from Commons is copyright issues. ] (]) 23:24, 27 January 2010 (UTC)
::So what can we do here? Add them to the blocked images list? --] (]) 02:00, 28 January 2010 (UTC)
::Humm... I hadn't realized that they're on Commons. We should probably have a discussion about that in particular; with a wider focus though (not limited to only those images). You're correct of course that they shouldn't be deleted from Commons simply because we may not like their use here. I'm not really sure how to accurately express this point, but while these particular images, and similar ones, can live on Commons because they have a compatible license, their ''content'' is distinct in that they're... more functional? They're not really <u>content</u> on their own, in the sense that an image which could potentially become a "Featured image" is "real content". As long as that sort of distinction is clearly made, somehow, then I wouldn't have any real issue with a policy stating that those images should be avoided in most articles here in en.wikipedia. <small>(on the other hand, I'm somewhat hesitant to sanction the creation of yet another item of busywork for some editors to immerse themselves in...)</small><br/>— ] (]) 10:28, 28 January 2010 (UTC)
:::Also, Commons doesn't care about the likes of ]. If it's free, it's OK ] (]) 11:24, 28 January 2010 (UTC)
::::Right, and they shouldn't either, which I touched on above. I think that my main point here is that, as far as I'm aware of, we don't have any formal policy on dealing with the use of Commons content here, and we probably should. We simply need to be cognizant of the widespread nature of such potential policy. We can't create policy stating that these specific images aren't allowed to be used any longer (well, we could of course, but that would be a mistake because it would be overly specific, and people ''will'' take that as a wider policy statement regardless of any intent).<br/>— ] (]) 11:32, 28 January 2010 (UTC)
:::::Didn't FFD used to mean File for Discussion? Can't we modify FFD and block unsuitable Commons images, basically extend the current FFD process rather than create an entire policy for Commons images? ] (]) 11:38, 28 January 2010 (UTC)
::::::The end effect that a new policy would cause would naturally be a change in the FFD process, I'm sure. From an organizational point of view, I'm personally leery of "backdooring" new policy by changing the manner in which certain processes operate. The manner in which a potential policy would affect Misplaced Pages is clearly demonstrated by this very discussion, which demonstrates to me that we're talking about new policy here rather than some relatively minor process wonkery.<br/>— ] (]) 12:19, 28 January 2010 (UTC)
:::::::How do you suggest we proceed? ] (]) 13:23, 28 January 2010 (UTC)
::::::::Well, I just posted a note on the talk page at ]. Hopefully someone there will come and comment. You're free to create ] if you'd like, of course (with the appropriate {{tl|Proposed policy}} tag at the top, obviously).<br/>— ] (]) 14:13, 28 January 2010 (UTC)
:::::::::Is blocking of Commons images possible from a technical point of view? ] (]) 14:58, 28 January 2010 (UTC)


:So essentially what you are proposing is to delete ] and all its subcategories? I think it would be best to open a CfD for this, so that the full implications can be discussed and consensus assured. It is likely to have an effect on assessment tools, and tables such as ] would no longer add up to the expected number &mdash;&nbsp;Martin <small>(]&nbsp;·&nbsp;])</small> 22:13, 14 December 2024 (UTC)
::::::::::From a technical standpoint, we could upload a 1x1 transparent pixel under the same name, but I don't think the elimination of these images is as cut-and-dry as the original post states. The consensus seemed to be to deprecate the use but not to eliminate them where they exist. See also: ]. –<font face="verdana" color="black">]</font>] 15:03, 28 January 2010 (UTC)
:::::::::::Let's put the original suggestion on the back-burner for now and pretend we have a Commons image that we have 100% agreement to remove but that isn't a copyvio; how do we deal with it? ] (]) 16:04, 28 January 2010 (UTC)
::There was a CfD specifically for one, and the deletion of ] doesn't seem to have broken anything so far. A CfD for the deletion of 1700+ pages seems impractical; an RfC would probably be better. ] (]) 08:52, 16 December 2024 (UTC)
:::Well a CfD just got closed with 14,000 categories, so that is not a barrier. It is also the technically correct venue for such discussions. By the way, all of the quality/importance intersection categories check that the category exists before using it, so deleting them shouldn't break anything. &mdash;&nbsp;Martin <small>(]&nbsp;·&nbsp;])</small> 08:57, 16 December 2024 (UTC)
::::::::::::I don't follow. We could bot remove it or as I said put in a single transparent pixel on the Misplaced Pages page with the same name as the commons image. –<font face="verdana" color="black">]</font>] 18:38, 28 January 2010 (UTC)
::::And were all these cats tagged, or how was this handled? ] (]) 10:21, 16 December 2024 (UTC)
:::::]. HouseBlaster took care of listing each separate cateory on the working page. &mdash;&nbsp;Martin <small>(]&nbsp;·&nbsp;])</small> 10:43, 16 December 2024 (UTC)
::::::I have no idea what the "working page" is though. ] (]) 11:02, 16 December 2024 (UTC)


I'm going to have to '''oppose''' any more changes to class categories. Already changes are causing chaos across the system with the bots unable to process renamings and fixing redirects whilst ] is being overwhelmed by the side effects. Quite simply we must have no more changes that cannot be properly processed. Any proposal must have clear instructions posted before it is initiated, not some vague promise to fix a module later on. ] (]) 13:16, 16 December 2024 (UTC)
:←I'd say that it should simply be removed from any use in the article namespace (other namespaces shouldn't be a concern at all, here). I don't think that we should pick out specific images/files which shouldn't be used, although that's one possible approach, but we ought to develop a category with clear inclusion criteria. Adding a hidden maintenance cat to the File namespace page which holds the image would facilitate tracking. I think that the English Misplaced Pages page for a commons file can hold our own categories for the file, but if not I'm fairly certain that we could coordinate with Commons in order to create an appropriate category there.<br/>— ] (]) 16:54, 28 January 2010 (UTC)
:Then I'm at an impasse. Module people tell me "start a CfD", you tell me "no CfD, first make changes at the module". No one wants the NA categories for these groups. What we can do is 1. RfC to formalize that they are unwanted, 2. Change module so they no longer get populated 3. Delete the empty cats caused by steps 1 and 2. Is that a workable plan for everybody? ] (]) 13:39, 16 December 2024 (UTC)
:: Any category system would be open to massive abuse, unless the page was fully protected afterwards and a bot knew only to remove files from the category. ] (]) 18:34, 28 January 2010 (UTC)
::I don't think @] was telling you to make the changes at the module first, rather to prepare the changes in advance so that the changes can be implemented as soon as the CfD reaches consensus. For example this might be achieved by having a detailed list of all the changes prepared and published in a format that can be fed to a bot. For a change of this volume though I do think a discussion as well advertised as an RFC is preferable to a CfD though. ] (]) 14:43, 16 December 2024 (UTC)
:::humm... this seems to be coming out of left field, can you explain better? Correct me if I'm mistaken, but it sounds like you're saying that if people were to disagree with you adding the category to an image and they removed it, you feel that it's your right to judge them as abusing the system somehow. Protection doesn't exist to resolve content issues, after all.<br/>— ] (]) 19:12, 28 January 2010 (UTC)
::::No, I'm not saying I'd judge them. I thought you were suggesting that after an FFD-like discussion, we'd place a ] on the wiki image page. We would have a bot which would ensure the image wasn't used after the ''FFB (File for Barring)'' discussion had been completed. Now, for the system above to work, we'd have to page-protect the image page ] (]) 22:13, 28 January 2010 (UTC)
:::Got it in one. There are just too many problems at the moment because the modules are not being properly amended in time. We need to be firmer in requiring proponents to identify how to make the change before the proposal goes live so others can enact it if necessary, not close the discussion, slap the category on the working page and let a mess pile up whilst no changes to the module are implemented. ] (]) 19:37, 16 December 2024 (UTC)
::::Oh, I got it as well, but at the module talk page, I was told to first have a CfD (to determine consensus first I suppose, instead of writing the code without knowing if it will be implemented). As I probably lack the knowledge to make the correct module changes, I'm at an impasse. That's why I suggested an RfC instead of a CfD to determine the consensus for "deletion after the module has been changed", instead of a CfD which is more of the "delete it now" variety. No one here has really objected to the deletion per se, but I guess that a more formal discussion might be welcome. ] (]) 10:09, 17 December 2024 (UTC)
::Yes, this is exactly right - if there's solid consensus to remove an image but the image itself isn't intrinsically bad, then just go around and remove all instances of it in article space. Deleting files (or templates, etc) ''in order'' to remove editorially disputed content is a bit inefficient, and not really what FFD is designed for. ] | ] | 19:07, 28 January 2010 (UTC)
*'''Oppose''' on the grounds that I think the way we do it currently is fine. ] (]) 05:33, 18 December 2024 (UTC)
:::Removing the back links is fine until the file starts to pop up again and again and again. Why have ] and other policies and guidelines if we can't remove ''original images'' which by their very nature would be copyright free ] (]) 22:13, 28 January 2010 (UTC)
**What's the benefit of having two or three categories for the same group of pages? We have multiple systems (with two or three cats, and apparently other ones as well), with no apparent reason to keep this around. As an example, we have ] with more than 50,000 pages, e.g. ] apparently. But when I go to that page, it isn't listed in that category, it is supposedly listed in ] (which seems to be a nonsense category, we shouldn't have NA-class, only NA-importance). but that category doesn't contain that page. So now I have no idea what's going on or what any of this is trying to achieve. ] (]) 08:30, 18 December 2024 (UTC)
**:Something changed recently. I think. But it is useful to know which NA pages are tagged with a project with a granularity beyond just "Not Article". It helps me do maintenance and find things that are tagged improperly, especially with categories. I do not care what happens to the importance ratings. ] (]) 09:20, 18 December 2024 (UTC)


== Category:Current sports events ==
===Tangential issue: election templates===


I would like to propose that sports articles should be left in the ] for 48 hours after these events have finished. I'm sure many Misplaced Pages sports fans (including me) open CAT:CSE first and then click on a sporting event in that list. And we would like to do so in the coming days after the event ends to see the final standings and results.
A master summary template (which I don't particularly like) has been created to take up a huge piece of real estate on every page that covers a specific political election (e.g. ] or ]). On many of these templates, there are one or more rather unattractive "No free image: do you know of one?" placeholders (see for example, ]). I didn't follow the original discussions referred to above, so I'm not completely clear on the issues involved, but would they have any bearing on placeholders for pictures that may not even yet be in Wikimedia Commons, as opposed to pictures that have been removed for copyright or usage reasons? ] (]) 07:13, 30 January 2010 (UTC)


Currently, this category is being removed from articles too early, sometimes even before the event ends. Just like yesterday. ], what do you say about that?
:I assume that you're talking about {{tl|Infobox Election}}? This started by essentially advocating for the removal of all instances where ] or ] are being used here on en.wikipedia. However, since those images are located on Commons, this discussion has morphed into a discussion on handling the use of images from Commons in general, which is a wider issue which I think is worth discussing regardless. So, in the case of ], those placeholders would likely be removed from the infobox.<br/>—&nbsp;] (]&thinsp;&bull;&thinsp;]) 12:33, 30 January 2010 (UTC)


So I would like to ask you to consider my proposal. Or, if you have a better suggestion, please comment. Thanks, ] (]) 16:25, 9 December 2024 (UTC)
== Requested articles template pages for watching individual subjects ==


:Thank you for bringing up this point. I agree that leaving articles in the Category:Current sports events for a short grace period after the event concludes—such as 48 hours—would benefit readers who want to catch up on the final standings and outcomes. ] (]) 18:19, 9 December 2024 (UTC)
Would it make sense to convert the lowest-level sections of the ] tree into included templates (much the way the Peer Review section is now set up)? That way those of us with a particular interest in certain subjects can just keep a watch on those templates, rather than having to watch the entire page. Thanks.&mdash;] (]) 20:27, 28 January 2010 (UTC)
: Sounds reasonable on its face. ] (]) 23:24, 9 December 2024 (UTC)
:How would this be policed though? Usually that category is populated by the {{tl|current sport event}} template, which every user is going to want to remove immediately after it finishes. '''] <sup>(] • ])</sup>''' 19:51, 11 December 2024 (UTC)


::{{ping|Lee Vilenski}} First of all, the ] has nothing to do with the ]; articles are added to that category in the usual way.
== Creation of a new category of established editors called '''RS-Reviewers''' ==


::You ask how it would be policed. Simply, we will teach editors to do it that way – to leave an article in that category for another 48 hours. AnishaShar has already expressed their opinion above. ] is also known for removing 'CAT:CSE's from articles. I think we could put some kind of notice in that category so other editors can notice it. We could set up a vote here. Maybe someone else will have a better idea. ] (]) 20:25, 14 December 2024 (UTC)
'''The proposal:''' Allow trusted and established users who have a keen understanding of what are and what are not ] (their past involvement would be evidence) an additional user right (akin to 'autoreviewer', or 'rollbacker'...) called '''rs-reviewer'''. RS-Reviewers would be responsible for involving themselves in discussions on issues raised on the ]. They would have the additional responsibility to ensure that RS issues on the noticeboard, in general, are resolved within a week, and at the maximum within a fortnight. They would also have the additional power to enforce the changes that are so discussed, on the specific article in question.
:Would it not be more suitable for a "recently completed sports event" category? It's pretty inaccurate to say it's current when the event finished over a day ago. '''] <sup>(] • ])</sup>''' 21:03, 14 December 2024 (UTC)


Okay Lee, that's also a good idea. We have these two sports event categories:
'''The genesis of the proposal:''' Currently, the reliable sources noticeboard - presumably the most important forum for discussing RS issues - sees issues being raised and previously involved editors debating in the same manner as they would have done on the specific article's page. Many a time, due to an overload of discussions from involved editors, independent commentators - who would have left their comments initially - drift away from the discussions. As a result, discussions do not reach a conclusive end. And in some cases, discussions just keep languishing on the noticeboard with extensive commentary. '''RS-Reviewers''' would work towards ensuring discussions are undertaken in a concise manner and would be able to mediate the discussions towards the appropriate conclusion within a given time frame.
* ]
* ]
* ] can be a suitable addition to those two. ], you are also interested in categories and sporting events; what is your opinion? ] (]) 18:14, 16 December 2024 (UTC)


::I don't have any objection to a Recent sports events category being added, but personally, if I want to see results of recent sports events, I would be more likely to go to ], which should include all recent events. ] (]) 23:30, 16 December 2024 (UTC)
'''Benefits:'''
*The moment editors of an article - who would have brought an issue to the reliable sources noticeboard - note that there is an RS-Reviewer amongst them, their discussions would (in general) be more specific, logical and rationally civic.
*If the RS-Reviewer sees that discussions are not being allowed to reach a conclusive end - due to (perhaps) tendentious discussions - he/she would be able to report the situation to an administrator who, knowing that the report has been raised by an RS-Reviewer, would be in a better position to understand the stance to take.
*Administrators would be subsequently able to give more time for administrative tasks (similar to what happened after introducing the 'rollbacker' system).
*Edit warring on specific articles would also reduce, due to such a formal mediation by designated RS-Reviewers, on the noticeboard, and due to another reason given right below.
*Over time, RS-Reviewers will also involve themselves on talk pages of specific articles as neutral mediating entities working towards a consensus solution.
*Additionally, it would allow established users more involvement with the project (again, similar to what having the 'rollbacker' or 'autoreviewer' status gives) and further trust within the community.


:::Did this get the go-ahead then? I see a comment has been added to the category, and my most recent edit was reverted when I removed the category after an event finished. I didn't see any further discussion after my last comment. ] (]) 09:37, 25 December 2024 (UTC)
'''How would RS-Reviewers be selected'''
*A centralised forum (similar to 'rollback' granting) would be set up, where established users would have to show administrators at least three instances of past involvement on the reliable sources noticeboard forum or on specific article's talk forums, where their comments worked towards consensus with respect to issues related to reliable sources. Once an '''RS-Reviewer''' power is granted, a tag would appear alongside the username in the link on user rights. RS-Reviewers thus selected would also be allowed to upload a standard template that announces they have RS-Reviewer rights.


== User-generated conflict maps ==
'''Past similar perennial proposals:'''
] talked about creating different kinds of administrators. Although the suggestion here is not that, it might be seen by some to be that, therefore I have listed the link. ] ] 08:02, 29 January 2010 (UTC)
:It sounds to me like basically you're saying, give certain people a "hey, this person is officially sanctioned as knowledgeable about RSes, so listen to them!" indication, which really rubs me the wrong way. Seems pretty against WP spirit to me. It also doesn't seem to grant any actual powers. ] (]) 15:11, 29 January 2010 (UTC)
:Solution in search of a problem. If discussion is veering off-topic then mention it and try to get discussion back on topic, you don't need a special title to do this. If dispute resolution is needed, we have a ] process already in place. <b style="color:#c22">^</b>]</sup>]]&nbsp;<em style="font-size:10px;">18:13, 29 January 2010 (UTC)</em>
:Don't see the point. We have admins as they use tools, not to make them special wise rulers. Why do we need to say that certain editors know everything on RSs? ]<span style="background-color:white; color:grey;">&amp;</span>] 20:36, 29 January 2010 (UTC)
:So basically it's a new user right with no actual technical tools or abilities attached to it, but with the power to essentially override a consensus and enforce their own view. And the proposal is to give this out for making 3 useful comments on a board that gets that many new threads every day? I'm not quite sure which part of the proposal I like least. <span style="font-family:Broadway">]]</span> 22:42, 29 January 2010 (UTC)
:This proposal strikes me as well-intentioned but misguided; it is contrary to the core values of Misplaced Pages to endow any user with "more say" than the next. ]<b><font color="#6060BF">]</font></b> 22:52, 29 January 2010 (UTC)
:It strikes me as, so to say, ''unconstitutional'', in the sense: inconsistent with our core principles. Like the 'established editors' and so on. The lack of uninvolved editors to help in dispute resolution is indeed a real and persistent problem, but this is not a way to solve it. ] (]) 23:11, 29 January 2010 (UTC)


In a number of articles we have (or had) user-generated conflict maps. I think the main ones at the moment are ] and ]. The war in Afghanistan had one until it was removed as poorly-sourced in early 2021. As you can see from a brief review of ] the map has become quite controversial there too.
:This reminds me of point two in the ] essay.. regarding ''"You're not smarter than everyone else"''.. Good reading.. the entire essay is.. -- ]] 23:40, 29 January 2010 (UTC)
:I guess the issue is not about giving another group of established editors any additional tools, but about giving them powers to (in?)formally mediate on the reliable sources noticeboard, as many a time discussions continue to be as obfuscated on the RS noticeboard as they generally are on the article's talk page. There is, however, a third-party opinion forum available that editors can use currently. This third-party opinion, I expect, is more or less used for the same purposes that I am mentioning. So is the reliable sources noticeboard. However, I have no issues with not giving the additional tag of an RS-Reviewer to the editor, in case it is seen as not adding to the solution. ] ] 03:25, 30 January 2010 (UTC)
:Unnecessary community division; solution in search of a problem. We're not Citizendium and have never had a problem with not having official Experts, I don't see why we should start now. ''Vive ], ], ]''! --] ] 06:03, 31 January 2010 (UTC)


My personal position is that sourcing conflict maps entirely from reports of occupation by one side or another of individual towns at various times, typically from Twitter accounts of dubious reliability, to produce a map of the current situation in an entire country (which is the process described ]), is a ]/]. I also don't see liveuamap.com as necessarily being a highly reliable source either since it basically is an ]/Wiki-style user-generated source, and ]. I can understand it if a reliable source produces a map that we can use, but that isn't what's happening here.
== Suggestion for movie articles re:Rotten Tomatoes. ==


Part of the reason this flies under the radar on Misplaced Pages is it ultimately isn't information hosted on EN WP but instead on Commons, where reliable sourcing etc. is not a requirement. However, it is being used on Misplaced Pages to present information to users and therefore should fall within our PAGs.
Quite a few if not most articles use ] to show readers how well (or badly) a movie has done. The thing is, movie ratings often fluctuate, so quite often someone has to edit the ratings (yesterday it was 25%, now it's 27%). My suggestion is this: instead of constantly editing the percentage number, why not link directly to the Rotten Tomatoes page for the movie and just use a term like 'poorly rated' or 'moderately well rated'? As an example: ]:


I think these maps should be deprecated unless they can be shown to be sourced entirely to a reliable source, and not assembled out of individual reports including unreliable ] sources. ] (]) 16:57, 11 December 2024 (UTC)
Reception


:A lot of the maps seem like they run into SYNTH issues because if they're based on single sources they're likely running into copyright issue as derivative works. I would agree though that if an image does not have clear sourcing it shouldn't be used as running into primary/synth issues. ] <sup><small>]</small></sup> 17:09, 11 December 2024 (UTC)
The film was a considerable success at the box office and became highly profitable on a budget of about £5 million ($9.8 million). In the UK, it took in £6.1 million ($11.89 million), while in the U.S. it became a surprise hit, taking over $45 million despite a limited release at fewer than 1,500 screens across the country. The film garnered around $82.7 million worldwide.
::Though simple information isn't copyrightable, if it's sufficiently visually similar I suppose that might constitute a copyvio. ''']]''' 02:32, 13 December 2024 (UTC)
Critical views of the film were very positive; the review site Rotten Tomatoes rates it 88%. On Metacritic it received a 73 (out of 100) based on 39 reviews.
:I agree these violate OR and at least the spirit of NOTNEWS and should be deprecated. I remember during the Wagner rebellion we had to fix one that incorrectly depicted Wagner as controlling a swath of Russia. ] (]) 05:47, 13 December 2024 (UTC)
]
* The ] ''(right)'' seems quite respectable being based on the work of the ] and having lots of thoughtful process and rules for updates. It is used on many pages and in many Wikipedias. There is therefore a considerable consensus for its use. ]🐉(]) 11:33, 18 December 2024 (UTC)
:'''Oppose''': First off, I'd like to state my bias as a bit of a map geek. I've followed the conflict maps closely for years.
:I think the premise of this question is flawed. ''Some'' maps may be poorly sourced, but that doesn't mean all of them are. The updates to the Syrian, Ukraine, and Burma conflict maps are sourced to third parties. So that resolves the OR issue.
:The sources largely agree with each other, which makes SYNTH irrelevant. Occasionally one source may be ahead of another by a few hours (e.g., LiveUaMap vs. ISW), but they're almost entirely in lock step.
:I think this proposal throws out the baby with the bathwater. One bad map doesn't mean we stop using maps; it means we stop using ''bad'' maps.
:You may not like the fact that these sources sometimes use OSI (open-source intelligence). Unfortunately, that is the nature of conflict in a zone where the press isn't allowed. Any information you get from the AP or the US government is likely to rely on the same sources.
:Do they make mistakes? Probably; but so do ''all'' historical sources. And these maps have the advantage that the Commons community continuously reviews changes made by other users. Much in the same way that Misplaced Pages is often more accurate than historical encyclopedias, I believe crowdsourcing may make these maps more accurate than historical ones.
:I think deprecating these maps would leave the reader at a loss (pictures speak a 1,000 words and all that). Does it get a border crossing wrong here or there? Yes, but the knowledge is largely correct.
:It would be an absolute shame to lose access to this knowledge. ] (]<small> • </small>]) 22:59, 19 December 2024 (UTC)
::@] ] is frowned upon as an argument for good reason. Beyond that: 1) the fact that these are based on fragmentary data is strangely not mentioned at all (] says 'Military situation as of December 18, 2024 at 2:00pm ET' which suggests that it's quite authoritative and should be trusted; the fact that it's based off the ISW is not disclosed.) 2) I'm not seeing where all the information is coming from the ISW. The ISW's map only covers territory, stuff like bridges, dams, "strategic hills" and the like are not present on the ISW map. Where is that info coming from? ] <sup><small>]</small></sup> 23:10, 19 December 2024 (UTC)
:::The Commons Syria map uses both the ISW and Liveuamap. The two are largely in agreement, with Liveuamap being more precise but using less reliable sources. If you have an issue with using Liveuamap as a source, fine, bring it up on the talk pages where it's used, or on the Commons talk page itself. But banning any ''any'' map of a conflict is throwing out the baby with the bathwater. The Ukraine map is largely based on ISW-verifiable information.
:::With regards to actual locations like bridges, I'm against banning Commons users from augmenting maps with easily verifiable landmarks. That definition of SYN is broad to the point of meaningless, as it would apply to any user-generated content that uses more than one source. ] (]<small> • </small>]) 23:50, 20 December 2024 (UTC)
:'''Weak Oppose''' I've been updating the Ukraine map since May 2022, so I hope my input is helpful. While I agree that some of the sources currently being used to update these maps may be dubious in nature, that has not always been the case. In the past, particularly for the Syria map, these maps have been considered among the most accurate online due to their quality sourcing. It used to be that a source was required for each town if it was to be displayed on these maps, but more recently, people have just accepted taking sources like LivaUAMap and the ISW and copying them exactly. Personally, I think we should keep the maps but change how they are sourced. I think that going back to the old system of requiring a reliable source for each town would clear up most of the issues that you are referring to, though it would probably mean that the maps would be less detailed than they currently are now. <span style="font-family:Copperplate Gothic, Ebrima;background-color:OrangeRed;border-radius:7px;text-shadow:2px 2px 4px#000000;padding:3px 3px;">]</span><sup>]</sup> 07:23, 21 December 2024 (UTC)
*'''Oppose''' The campaign maps are one of our absolute best features. The Syrian campaign map in particular was very accurate for much of the war. Having a high quality SVG of an entire country like that is awesome, and there really isn't anything else like it out there, which is why it provides such value to our readers. I think we have to recognize, of course, that they're not 100% accurate, due to the fog of war. I wouldn't mind if we created subpages about the maps? Like, with a list of sources and their dates, designed to be reader facing, so that our readers could verify the control of specific towns for themselves. But getting rid of the maps altogether is throwing out the baby with the bathwater. ] <sup>]</sup>] 23:33, 22 December 2024 (UTC)


== Google Maps: Maps, Places and Routes ==
Doing it my way, it would look like this:
----
Reception


]
The film was a considerable success at the box office and became highly profitable on a budget of about £5 million ($9.8 million). In the UK, it took in £6.1 million ($11.89 million), while in the U.S. it became a surprise hit, taking over $45 million despite a limited release at fewer than 1,500 screens across the country. The film garnered around $82.7 million worldwide.
Critical views of the film were very positive; the review site Rotten Tomatoes rates it . On Metacritic it also received .
----
...or something like that. Then we wouldn't have to constantly 'fix' the numbers, in effect they'd be self-repairing. ]] 01:21, 30 January 2010 (UTC)


Google Maps have the following categories: Maps, Places and Routes
: We don't include inline links to external sites in article text. The way to solve this problem is for movie editors to use "as of" dates when writing the text; the difference between 25% and 27% isn't worth changing. ] (]) 01:43, 30 January 2010 (UTC)
:Absolutely not. Inline external links should be avoided at all costs regardless, but especially in this sort of usage pattern. This makes Misplaced Pages inappropriately reliant on external web sites, of which we have absolutely no control. I don't knock the underlying sentiment, but this is not the way to go.<br/>—&nbsp;] (]&thinsp;&bull;&thinsp;]) 01:42, 30 January 2010 (UTC)


for example:
: Fluctuating success rates are an issue across many article types. A more pressing need for something like this would be stating how profitable a business is, for example, as that's a more frequent and wide-ranging fluctuation; and we don't throw up our hands in that case and say the information changes too frequently for us to keep up. Directing readers off to an external site for frequently-changing information seems like the beginning of turning Misplaced Pages into a link farm, rather than an information source in itself. <font face="Century Gothic">] <small>]</small> 06:38, 30 Jan 2010 (UTC)</font>


most significant locations have a www.google.com/maps/place/___ URL
: I'm in consensus with what others have said here. I feel that using "as of" is an appropriate enough indication to either prompt someone to update it or to inform the reader to check for themselves whether this has changed. Misplaced Pages is an encyclopaedia, not simply a portal by which to find information from elsewhere. ] (]) 12:22, 31 January 2010 (UTC)


these should be acknowledged and used somehow, perhaps
== Change of format for ] ==


] (]) 00:22, 12 December 2024 (UTC)
As it currently stands, it is unclear where new additions to ] should be placed. At the bottom of the page that the EDIT link takes you to? Or above the next level 2 heading? I propose creating a standard template similar to the one used on the page for reporting vandals for admin attention, which would be something like the following:
'''{{{1}}}'''
*Page it will be used on: ]
*Reason for request: {{{3}}} <nowiki>~~~~</nowiki>
This will result in a format something like {{quote|'''Address to whitelist'''
*Page it will be used on ]
*Reason for request: <reason here> ]<sup>(])</sup><sub>(])</sub> 21:18, 30 January 2010 (UTC)}}
It will create a standardized, easy to read format that will still allow admins to comment on it, by using second level bullets, etc, and will also avoid the problem of the additional level 3 headings. The
actual talk page will need examples of usage, but this should be trivial, as it can be based on the previously mentioned vandalism reporting page. I'd go ahead and start implementing this myself, but I'm not sure how appropriate that is on a page intended mainly for admin use.
]<sup>(])</sup><sub>(])</sub> 21:18, 30 January 2010 (UTC)
:I've created an example of what I mean at ], and a testing page at ]. Other testers and comments are appreciated. ]<sup>(])</sup><sub>(])</sub> 21:34, 30 January 2010 (UTC)


:What is the proposal here? If its for the google maps article, that would be more suitable for the talk page. As I see it, your proposal is simply saying that google maps has an api and we should use it for... something. I could be missing something, though ] (]) 08:20, 17 December 2024 (UTC)
:Most of the requests we get on the whitelist are from newbies and spammers who quickly screw up the page due to lack of knowledge on wiki markup. I guess this is a good idea as it may reduce the amount of screwing up that occurs (and hopefully reduce the number of bad faith requests as it forces the spammers to come up with a reason why we need the link). ] 02:37, 31 January 2010 (UTC)
::As I understand it, the IP is proposing embeds of google maps, which would be nice from a functionality standpoint (the embedded map is kinda-rather buggy), but given Google is an advertising company, isn't great from a privacy standpoint. ''']]''' 16:25, 17 December 2024 (UTC)
:::I think they're proposing the use of external links rather than embedding. ] (]) 18:16, 17 December 2024 (UTC)


== Allowing page movers to enable two-factor authentication ==
::Posted on the whitelist talk and ]. ] 11:05, 31 January 2010 (UTC)


I would like to propose that members of the ] user group be granted the <code>oathauth-enable</code> permission. This would allow them to use ] to enable ] on their accounts.
:I like this idea a lot. I often scan through the whitelist requests and it is often quite fiddly to see what the actual reason for whitelisting the page is. This uniform system would make it so much easier for the admins and for the people requesting. ] (]) 12:19, 31 January 2010 (UTC)


=== Rationale (2FA for page movers) ===
== Image Resize Bot ==
The page mover guideline already obligates people in that group to ], and failing to follow proper account security processes is grounds for ] of the right. This is because the group allows its members to (a) move pages along with up to 100 subpages, (b) override the title blacklist, and (c) have an increased rate limit for moving pages. In the hands of a vandal, these permissions could allow significant damage to be done very quickly, which is likely to be difficult to reverse.


Additionally, there is precedent for granting 2FA access to users with rights that could be extremely dangerous in the event of account compromise, for instance, ], ], and ] have the ability to enable this access, as do most administrator-level permissions (sysop, checkuser, oversight, bureaucrat, steward, interface admin).
Hey Guys,


=== Discussion (2FA for page movers) ===
I wrote a bot to resize images in ]. There has been some controversy over how the bot should operate. Right now the bot works as follows:
* '''Support''' as proposer. ]<sub>]<sub>]</sub></sub> (]/]) 20:29, 12 December 2024 (UTC)
* '''Support''' (but if you really want 2FA you can just request permission to enable it on Meta) ] ] 20:41, 12 December 2024 (UTC)
*:For the record, I do have 2FA enabled. ]<sub>]<sub>]</sub></sub> (]/]) 21:47, 12 December 2024 (UTC)
*::Oops, that says you are member of "Two-factor authentication testers" (testers = good luck with that). ] (]) 23:52, 14 December 2024 (UTC)
*::: A group name which is IMO seriously misleading - 2FA is not being tested, it's being actively used to protect accounts. ] ] 23:53, 14 December 2024 (UTC)
*::::] still says "currently in production testing with administrators (and users with admin-like permissions like interface editors), bureaucrats, checkusers, oversighters, stewards, edit filter managers and the OATH-testers global group." ] ] 09:42, 15 December 2024 (UTC)
*'''Support''' as a pagemover myself, given the potential risks and need for increased security. I haven't requested it yet as I wasn't sure I qualified and didn't want to bother the stewards, but having <code><nowiki>oathauth-enable</nowiki></code> by default would make the process a lot more practical. ] (] · ]) 22:30, 12 December 2024 (UTC)
*: Anyone is qualified - the filter for stewards granting 2FA is just "do you know what you're doing". ] ] 22:46, 12 December 2024 (UTC)
*'''Question''' When's the last time a page mover has had their account compromised and used for pagemove vandalism? Edit 14:35 UTC: I'm not doubting the nom, rather I'm curious and can't think of a better way to phrase things. ''']]''' 02:30, 13 December 2024 (UTC)
*Why isn't everybody allowed to enable 2FA? I've never heard of any other website where users have to go request someone's (pro forma, rubber-stamp) permission if they want to use 2FA. And is it accurate that 2FA, after eight years, is still ]? I guess my overall first impression didn't inspire me with confidence in the reliability and maintenance. ] (]) 06:34, 14 December 2024 (UTC)
** Because the recovery process if you lose access to your device and recovery codes is still "contact WMF Trust and Safety", which doesn't scale. See also ]. ]] 15:34, 14 December 2024 (UTC)
**:We should probably consult with WMF T&S before we create more work for them on what they might view as very low-risk accounts. Courtesy ping @]. –] <small>(])</small> 16:55, 14 December 2024 (UTC)
**:No update comment since 2020 doesn't fill me with hope. I like 2FA, but it needs to be developed into a usable solution for all. '''] <sup>(] • ])</sup>''' 00:09, 15 December 2024 (UTC)
**::I ain't a technical person, but could a less secure version of 2FA be introduced, where an email is sent for any login on new devices? ''']]''' 01:13, 15 December 2024 (UTC)
**:::Definitely. However email addresses also get detached from people, so that would require that people regularly reconfirm their contact information. —] (] • ]) 11:01, 18 December 2024 (UTC)
*:For TOTP (the 6-digit codes), it's not quite as bad as when it was written, as the implementation has been fixed over time. I haven't heard nearly as many instances of backup scratch codes not working these days compared to when it was new. The WebAuthn (physical security keys, Windows Hello, Apple Face ID, etc) implementation works fine on private wikis but I wouldn't recommend using it for CentralAuth, especially with the upcoming SUL3 migration. There's some hope it'll work better afterward, but will still require some development effort. As far as I'm aware, WMF is not currently planning to work on the 2FA implementation.{{pb}} As far as risk for page mover accounts goes, they're at a moderate risk. Page move vandalism, while annoying to revert, is reversible and is usually pretty loud (actions of compromised accounts can be detected and stopped easily). The increased ratelimit is the largest concern, but compared to something like account creator (which has noratelimit) it's not too bad. I'm more concerned about new page reviewer. There probably isn't a ton of harm to enabling 2FA for these groups, but there isn't a particularly compelling need either. ] (]) 12:47, 19 December 2024 (UTC)
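:''(Background note: the 6-digit TOTP codes discussed above are derived from a shared secret and the current 30-second time step, per RFC 6238. The following is only an illustrative, stdlib-only sketch of that mechanism, not MediaWiki's actual implementation; the function name and parameters are my own.)''

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Generate an RFC 6238 TOTP code from a base32-encoded secret."""
    # Base32-decode the shared secret, re-adding any stripped padding.
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    # Number of completed time steps since the Unix epoch.
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

:''Because both the server and the authenticator app compute the code independently from the secret and the clock, losing the secret (and the backup scratch codes) is precisely what makes account recovery hard.''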
*'''Support''' per nom. PMV is a high-trust role (suppressredirect is&nbsp;the ability to make a blue link turn red), and thus this makes sense. As a side note, I have changed this to bulleted discussion; # is used when we have separate sections for support and oppose. <b>]]</b>&nbsp;(]&nbsp;•&nbsp;he/they) 07:19, 14 December 2024 (UTC)
*'''Oppose''' As a pagemover myself, I find pagemover ''extremely'' useful and do not wish to lose it. It is nowhere near the same class as template editor. You can already ask the stewards for 2FA although I would recommend creating a separate account for the purpose. After all these years, 2FA remains experimental, buggy and cumbersome. Incompatible with the Microsoft Authenticator app on my iphone. ] ] 23:59, 14 December 2024 (UTC)
*:The proposal (as I read it) isn't "you must have 2FA", rather "you have the option to add it". '''] <sup>(] • ])</sup>''' 00:06, 15 December 2024 (UTC)
*::@], ] is correct. This would merely provide page movers with the option to enable it. ]<sub>]<sub>]</sub></sub> (]/]) 00:28, 15 December 2024 (UTC)
*:::Understood, but I do not want it associated with an administrator-level permission, which would mean I am not permitted to use it, as I am not an admin. ] ] 09:44, 15 December 2024 (UTC)
*::::It's not really that. It would be an opt-in to allow users (in the group) to put 2FA on their account - at their own discretion.
*::::The main reason why 2FA is currently limited to admins and the like is that they are more likely to be targeted for compromise and are also more experienced. The 2FA flag doesn't require any admin skills/tools and is only incidentally linked. '''] <sup>(] • ])</sup>''' 12:58, 15 December 2024 (UTC)
*:::::Wait, so why is 2FA not an option for everyone already? ] (]) 01:15, 18 December 2024 (UTC)
*::::::@] the MediaWiki's 2FA implementation is complex, and the WMF's processes to support people who get locked out of their account aren't able to handle a large volume of requests (developers can let those who can prove they are the owner of the account back in). My understanding is that the current processes cannot be efficiently scaled up either, as it requires 1:1 attention from a developer, so unless and until new processes have been designed, tested and implemented 2FA is intended to be restricted to those who understand how to use it correctly and understand the risks of getting locked out. ] (]) 09:36, 18 December 2024 (UTC)
*It probably won't make a huge difference because those who really desire 2FA can already ], and because no page mover will be required to do so. However, there will be page movers who wouldn't request a global permission for 2FA yet would enable it in their preferences if it was a simple option. And these page movers might benefit from 2FA even more than those who already care very strongly about the security of their account. ] (]) 03:18, 15 December 2024 (UTC)
*'''Support''' and I can't think of any argument against something not only opt-in but already able to be opted into. ] (]) 08:09, 15 December 2024 (UTC)
*'''Oppose''' this is a low value permission, not needed. If an individual PMV really wants to opt-in, they can already do so over at meta - no need to build custom configuration for this locally. — ] <sup>]</sup> 15:06, 18 December 2024 (UTC)
*'''Support'''; IMO all users should have the option to add 2FA. ] (]) 10:26, 19 December 2024 (UTC)
*'''Support''' All users should be able to opt in to 2FA. Lack of a scalable workflow for users locked out of their accounts is going to be addressed by WMF only if enough people are using 2FA (and getting locked out?) to warrant its inclusion in the product roadmap. – ] (]) 14:01, 19 December 2024 (UTC)
*:That (and to @] above) sounds like an argument to do just that - get support put in place and enable this globally, not to piecemeal it in tiny batches for discretionary groups on a single project (this custom configuration would support about 3/10ths of one percent of our active editors). To the point of this RFC, why do you think adding this for this '''specific''' tiny group is a good idea? — ] <sup>]</sup> 15:40, 19 December 2024 (UTC)
*::FWIW, I tried to turn this on for anyone on meta-wiki, and the RFC failed (]). — ] <sup>]</sup> 21:21, 19 December 2024 (UTC)
*:::Exactly. Rolling it out in small batches helps build the case for a bigger rollout in the future. – ] (]) 05:24, 20 December 2024 (UTC)
*:I'm pretty sure that 2FA is already available to anyone. You just have to want it enough to either request it "for testing purposes" or to go to testwiki and request that you be made an admin there, which will automatically give you access. See ]. ] (]) 23:41, 21 December 2024 (UTC)
*::We shouldn't have to jump through borderline manipulative and social-engineering hoops to get basic security functionality. <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — ] ] ] 😼 </span> 04:40, 22 December 2024 (UTC)
*'''Oppose'''. It sounds like account recovery when 2FA is enabled involves Trust and Safety. I don't think page movers' account security is important enough to justify increasing the burden on them. <span style="white-space: nowrap;">—]&nbsp;<sup>(]·])</sup></span> 14:10, 21 December 2024 (UTC)
*:Losing access to the account is less common nowadays since most 2FA apps, including Google Authenticator, have implemented cloud syncing so that even if you lose your phone, you can still access the codes from another device. – ] (]) 14:40, 21 December 2024 (UTC)
*::But this isn't about Google Authenticator. ] (]) 02:58, 22 December 2024 (UTC)
*:::Google Authenticator is a 2FA app, which at least till some point used to be the most popular one. – ] (]) 07:07, 22 December 2024 (UTC)
*::::But (I believe), it is not available for use at Misplaced Pages. ] (]) 07:27, 22 December 2024 (UTC)
*:::::That's not true. You can use any ] authenticator app for MediaWiki 2FA. I currently use Ente Auth, having moved on from Authy recently, and from Google Authenticator a few years back. {{pb}}In case you're thinking of SMS-based 2FA, it has become a thing of the past and is not supported by MediaWiki either because it's insecure (attackers have ways to trick your network provider to send them your texts). – ] (]) 09:19, 22 December 2024 (UTC)
*'''Support'''. Even aside from the fact that, in 2024+, everyone should be able to turn on 2FA&nbsp;.... Well, {{em|absolutely certainly}} should everyone who has an advanced bit, with potential for havoc in the wrong hands, be able to use 2FA here. That also includes template-editor, edit-filter-manager, file-mover, account-creator (and supersets like event-coordinator), checkuser (which is not strictly tied to adminship), and probably also mass-message-sender, perhaps a couple of the others, too. Some of us old hands have several of these bits and are almost as much risk as an admin when it comes to loss of account control. <span style="white-space:nowrap;font-family:'Trebuchet MS'"> — ] ] ] 😼 </span> 04:40, 22 December 2024 (UTC)
*:Take a look at ] - much of what you mentioned is already in place, because these are groups that could use it '''and''' are widespread groups used on most WMF projects. (Unlike extendedmover). — ] <sup>]</sup> 17:22, 22 December 2024 (UTC)
*:Re {{tq|That also includes , file-mover, account-creator (and supersets like event-coordinator), and probably mass-message-sender}}. How can in any way would file mover, account creator, event coordinator and mass message sender user groups be considered privileged, and therefore have the <code>oathauth-enable</code> userright? ] (]) 17:37, 24 December 2024 (UTC)
*Comment: It is really not usual for 2FA to be available to a user group that is not defined as privileged in the WMF files. By default, all user groups defined at CommonSettings.php (iirc) that are considered to be privileged have the <code>oathauth-enable</code> right. Also, the account security practices mentioned in ] are also mentioned at ], despite not being discussed at all. Wouldn't it be fair to have the <code>extendedmover</code> userright be defined as privileged? ] (]) 08:33, 23 December 2024 (UTC)
*'''Support'''. Like SMcCandlish, I'd prefer that anyone, and particularly any editor with advanced perms, be allowed to turn on 2FA if they want (this is already an option on some social media platforms). But this is a good start, too.{{pb}}Since this is a proposal to allow page movers to ''opt in'' to 2FA, rather than a proposal to ''mandate'' 2FA for page movers, I see no downside in doing this. &ndash; ] (]) 17:02, 23 December 2024 (UTC)
*'''Support''' this opt-in for PMs and the broader idea of '''everyone having it by default'''. Forgive me if this sounds blunt, but is the responsibility and accountability of protecting ''your'' account lie on ''you'' and not WMF. Yes, they can assist in recovery, but the burden should not lie on them. <span style="font-family:monospace;font-weight:bold">]:&lt;]&gt;</span> 17:13, 23 December 2024 (UTC)


== Photographs by Peter Klashorst ==
*If the image's longest side is greater than 400px and the aspect ratio is greater than 1/3:


Back in 2023 I ] a group of nude photographs by ] for deletion on Commons. I was concerned that the people depicted might not have been of age or consented to publication. Klashorst described himself as a "painting sex-tourist" because he would travel to third-world countries to have sex with women in brothels, and also paint pictures of them. On his Flickr account, he posted various nude photographs of African and Asian women, some of which appear to have been taken without the subjects' knowledge. Over the years, other Commons contributors have raised concerns about the Klashorst photographs (e.g. ).
*Resize the image so that the longest side is 350px using the MediaWiki resize algorithm (i.e. <nowiki>]</nowiki>)
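For illustration, the dimension rule above could be sketched in Python. This is only a sketch of the selection logic (the function and parameter names are mine, and I read "aspect ratio" as short side over long side); the bot itself applies MediaWiki's own resize algorithm via image syntax rather than resizing files directly:

```python
def plan_resize(width, height, max_side=400, target_side=350, min_aspect=1/3):
    """Return the new (width, height) if the rule fires, else None.

    Fires only when the longest side exceeds max_side and the aspect
    ratio (short side / long side) exceeds min_aspect.
    """
    long_side, short_side = max(width, height), min(width, height)
    if long_side <= max_side or short_side / long_side <= min_aspect:
        return None  # image is small enough, or too elongated, to leave alone
    scale = target_side / long_side
    return round(width * scale), round(height * scale)
```

So, under this reading, an 800×600 image would be scheduled for resizing to roughly 350×262, while a 300×200 image or a 900×100 banner would be left untouched.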


I noticed recently that several of the Klashorst images had disappeared from Commons but the deletions hadn't been logged. I believe this happens when the WMF takes an office action to remove files. I don't know for sure whether that's the case, or why only a small number of the photographs were removed this way.
The bot is in trial, but I would like some more community consensus before I proceed. The discussion is at the bot's ]


My proposal is that we stop using nude or explicit photographs by Klashorst in all namespaces of the English Misplaced Pages. This would affect about thirty pages, including high-traffic anatomy articles such as ] and ]. ]] 18:29, 16 December 2024 (UTC)
Thanks, ] (]) 06:21, 31 January 2010 (UTC)


:@]: This seems as if it's essentially a request for a community sanction, and thus probably belongs better on the ]. Please tell me if I am mistaken. ]<sub>]<sub>]</sub></sub> (]/]) 23:12, 16 December 2024 (UTC)
== Sitenotice for Britain Loves Misplaced Pages? ==
::{{re|JJPMaster}} I am fine with moving the discussion elsewhere, if you think it more suitable. ]] 02:16, 17 December 2024 (UTC)
:{{ping|Genericusername57}} I disagree with JJPMaster in that this seems to be the right venue, but I also disagree with your proposal. Klashorst might have been a sleazeball, yes, but the images at the two listed articles do not show recognizable subjects, nor do they resemble “creepshots”, nor is there evidence they’re underage. If you object to his images you can nominate them on Commons. Your ‘23 mass nomination failed because it was extremely indiscriminate (i.e. it included a self portrait of the artist). ] (]) 00:30, 17 December 2024 (UTC)
:: {{re|Dronebogus}} According to ], Commons users repeatedly contacted Klashorst, asking him to provide proof of age and consent for his models, but he did not do so. I am planning on renominating the photographs on Commons, and I think removing them from enwiki first will help avoid spurious ] arguments. The self-portrait you are referring to also included another naked person. ]] 02:16, 17 December 2024 (UTC)
:::{{ping|Genericusername57}} replacing the ones at ] and ] wouldn’t be difficult; the first article arguably violates ] and conflicts with ] only showing a single image anyway. However, I think it’s best if you went to those actual articles and discussed removing them. I don’t know what other pages use his images besides his own article, but they should be dealt with separately. If you want to discuss banning his photos from Wikimedia in general, that’s best discussed at Commons. In all cases my personal view is that, regardless of whether they actually run afoul of any laws, purging creepy, exploitative pornography of third-world women is no great loss. ] (]) 01:16, 18 December 2024 (UTC)
::::I have to confess that I do not remember the details of the attempts to clarify things with Peter. If this turns out to be something upon which this decision might turn, I will try to do more research. But I’m afraid it’s lost in the mists of time. ++]: ]/] 01:25, 24 December 2024 (UTC)
:::::Note also that further attempts to clarify matters directly with Peter will not be possible, as he is now deceased. ++]: ]/] 15:45, 24 December 2024 (UTC)
:Several issues here. First, if the files are illegal, that's a matter for Commons as they should be deleted. On the enwiki side of things, if there's doubt about legality, Commons has plenty of other photos that can be used instead. Just replace the photos. The second issue is exploitation. Commons does have ] which could apply, and depending on the country in which the photo was taken ], but it's a hard sell to delete things on Commons if it seems like the person in the photo consented (with or without payment). The problem with removing files that may be tainted by exploitation is we'd presumably have to remove basically all images of all people who were imprisoned, enslaved, colonized, or vulnerable at the time of the photo/painting/drawing. It becomes a balance where we consider the context of the image (the specifics of when/where/how it was taken), whether the subject is still alive (probably relevant here), and encyclopedic importance. I'd be inclined to agree with some above that there aren't many photos here that couldn't be replaced with something else from Commons, but I don't think you'll find support for a formalized ban. Here's a question: what happens when you just try to replace them? As long as the photo you're replacing it with is high quality and just as relevant to the article, I don't think you'd face many challenges. &mdash; <samp>] <sup style="font-size:80%;">]</sup></samp> \\ 16:20, 24 December 2024 (UTC)


== Move the last edited notice from the bottom of the page to somewhere that's easier to find ==
Hello. What would you think to having a site notice up for ? Something like:
: - a free photography competition - is running in 20 museums across the UK throughout February. Join in, take photos, win prizes!
One concern, I guess, would be that this would only directly apply to - but that's a fairly large percentage. It would of course be nice if it could be geolocated so that only British users could see it, but that isn't currently possible. I know that ] exists, but that appeals to a different audience than this (regular users cf. occasional visitors), and this is an event that would appeal to both really.


Currently, if you want to check when the last page edit was, you have to look at the edit history or scroll all the way to the bottom of the page and look for it near the licensing info. I propose moving it under the view history and watch buttons, across from the standard "This article is from Misplaced Pages" disclaimer. Non-technical users may be put off by the behind-the-scenes nature of the history page or simply not know of its existence. The mobile site handles this quite gracefully, in my opinion: while it is still at the bottom of the page, it isn't buried near the licensing info and takes up a noticeable portion of the page. ] (]) 08:32, 17 December 2024 (UTC)
I know that this is a bit unusual, but I figure it's worth discussing. ] (]) 12:35, 31 January 2010 (UTC)


:Editors can already enable {{slink|mw:XTools#PageInfo gadget}}, which provides this information (and more) below the article title. I don't think non-editors would find it useful enough to be worth the space. ] (]) 18:12, 17 December 2024 (UTC)
:I say go ahead and do it, with one caveat: I'd ensure that the message was geolocated, somehow. While it's interesting to me that the Brits are doing this, as you essentially pointed out yourself there's no easy way for those outside of Western Europe to actually participate. I thought you guys used some sort of geolocation targeting for the fundraiser?<br/>—&nbsp;] (]&thinsp;&bull;&thinsp;]) 14:31, 31 January 2010 (UTC)


== I wished Misplaced Pages supported wallpapers in pages... ==
== Dealing with Petitions ==


It would be even more awesome if we could change the wallpaper of pages in Misplaced Pages. But the fonts' colors could change to adapt to the wallpaper. The button for that might look like this: ] ] ] (]) 11:02, 21 December 2024 (UTC)
There's recently been an outburst (I'd say epidemic, but I'm trying to be neutral here) of "petitions" started in order to address a few controversial issues from one perspective or another. I think that we ought to... well, I don't want to say "outlaw" them, but I can't think of a better term. I imagine that if we could get some wide support for such a stance we could develop a policy and then MFD the dozen or so existing petitions.<br/>—&nbsp;] (]&thinsp;&bull;&thinsp;]) 14:41, 31 January 2010 (UTC)
:I was thinking along the same lines. I'm not a big fan of petitions (at least on Misplaced Pages). It doesn't seem constructive to me, to have a place where '''only''' support (or opposition) to a proposal can be stated. In fact it seems contrary to the general Misplaced Pages spirit. Proposals should be decided based on discussions where all sides can participate, rather than being open to influence by "political pressure", so to speak, of a bunch of one-sided petitions. <font face="Century Gothic">] <small>]</small> 15:20, 31 Jan 2010 (UTC)</font>
::] ? :) ] (]) 15:25, 31 January 2010 (UTC)
:::Perhaps all the "petitions" could be renamed "discussion" or "]" or similar, and a section for opposition added? Those who created the pages don't ], and I agree with Equazcion that a list of only those supporting something isn't useful. <font color="#00ACF4">╟─]]►]─╢</font> 15:39, 31 January 2010 (UTC)
::::The problem is that when a page is created that does not allow for opposing views, it ceases to seek consensus and instead becomes a campaign for one point of view. I say if you want to do a petition, print it out and go stand on the corner; Misplaced Pages is run by consensus, not popular opinion. Any admin worth their salt will give '''zero''' credibility to any process that ignores consensus, so these petitions have meaningless results. ]<small> <sup>(Need help? ])</sup></small> 15:46, 31 January 2010 (UTC)


:I think we already tried this. It was called ] ;) —] (] • ]) 11:51, 21 December 2024 (UTC)
== Proposal to tag for sock puppetry in AfD ==
:See ] for information on creating your own stylesheet. ] (]) 18:03, 21 December 2024 (UTC)


== Change page titles/names using "LGBTQ" to "LGBTQ+" ==
I made this template for tagging blocked or banned accounts' votes, to show that the vote was made by a sock puppet, in order to further enforce policies such as ] and ]. An example of how the tag would work is shown below.
Please see my reasoning at ] (and please post your thoughts there). It was proposed that I use this page to escalate this matter, as seen on the linked talk page. ] (]) 20:42, 23 December 2024 (UTC)
* '''Keep''' Clearly . ] (]) 15:51, 31 January 2010 (UTC) {{SpiAfD|Andrew The Assasin}}
'''Note that the above text was just an example of the template in action.''' The purpose of this discussion is to decide whether this would be a good idea and should be part of the criteria for enforcing blocks and bans. ] (]) 15:51, 31 January 2010 (UTC)
:Given that you've only made I must confess to being a little suspicious of your own provenance... <font color="#FFB911">╟─]]►]─╢</font> 16:01, 31 January 2010 (UTC)
::It appears Joe Chill has been too; I've just been accused of being John254. ] (]) 16:03, 31 January 2010 (UTC)

Latest revision as of 09:37, 25 December 2024

== RfC: Log the use of the HistMerge tool at both the merge target and merge source ==

Currently, there are open phab tickets proposing that the use of the HistMerge tool be logged at the target article in addition to the source article. Several proposals have been made:

  • Option 1a: When using Special:MergeHistory, a null edit should be placed in both the merge target and merge source's page's histories stating that a history merge took place.
    (phab:T341760: Special:MergeHistory should place a null edit in the page's history describing the merge, authored Jul 13 2023)
  • Option 1b: When using Special:MergeHistory, add a log entry recorded for the articles at the both HistMerge target and source that records the existence of a history merge.
    (phab:T118132: Merging pages should add a log entry to the destination page, authored Nov 8 2015)
  • Option 2: Do not log the use of the Special:MergeHistory tool at the merge target, maintaining the current status quo.

Should the use of the HistMerge tool be explicitly logged? If so, should the use be logged via an entry in the page history or should it instead be held in a dedicated log? — Red-tailed hawk (nest) 15:51, 20 November 2024 (UTC)

=== Survey: Log the use of the HistMerge tool ===

  • Option 1a/b. I am in principle in support of adding this logging functionality, since people don't typically have access to the source article title (where the histmerge is currently logged) when viewing an article in the wild. There have been several times I can think of when I've been going diff hunting or browsing page history and where some explicit note of a histmerge having occurred would have been useful. As for whether this is logged directly in the page history (as is done currently with page protection) or if this is merely in a separate log file, I don't have particularly strong feelings, but I do think that adding functionality to log histmerges at the target article would improve clarity in page histories. — Red-tailed hawk (nest) 15:51, 20 November 2024 (UTC)
  • Option 1a/b. No strong feelings on which way is best (I'll let the experienced histmergers comment on this), but logging a history merge definitely seems like a useful feature. Chaotic Enby (talk · contribs) 16:02, 20 November 2024 (UTC)
  • Option 1a/b. Chaotic Enby has said exactly what I would have said (but more concisely) had they not said it first. Thryduulf (talk) 16:23, 20 November 2024 (UTC)
  • 1b would be most important to me but 1a would be nice too. But this is really not the place for this sort of discussion, as noted below. Graham87 (talk) 16:28, 20 November 2024 (UTC)
  • Option 2 History merging done right should be seamless, leaving the page indistinguishable from if the copy-paste move being repaired had never happened. Adding extra annotations everywhere runs counter to that goal. Prefer 1b to 1a if we have to do one of them, as the extra null edits could easily interfere with the history merge being done in more complicated situations. * Pppery * it has begun... 16:49, 20 November 2024 (UTC)
    Could you expound on why they should be indistinguishable? I don't see how this could harm any utility. A log action at the target page would not show up in the history anyways, and a null edit would have no effect on comparing revisions. Aaron Liu (talk) 17:29, 20 November 2024 (UTC)
    Why shouldn't it be indistinguishable? Why is it necessary to go out of our way to say even louder that someone did something wrong and it had to be cleaned up? * Pppery * it has begun... 17:45, 20 November 2024 (UTC)
    All cleanup actions are logged to all the pages they affect. Aaron Liu (talk) 18:32, 20 November 2024 (UTC)
  • 2 History merges are already logged, so this survey name is somewhat off the mark. As someone who does this work: I do not think these should be displayed at either location. It would cause a lot of noise in history pages that people probably would not fundamentally understand (2 revisions for "please process this" and "remove tag" and a 3rd revision for the suggested log), and it would be "out of order" in that you will have merged a bunch of revisions but none of those revisions would be nearby the entry in the history page itself. I also find protections noisy in this way as well, and when moves end up causing a need for history merging, you end up with doubled move entries in the merged history, which also is confusing. Adding history merges to that case? No thanks. History merges are more like deletions and undeletions, which already do not add displayed content to the history view. Izno (talk) 16:54, 20 November 2024 (UTC)
    They presently are logged, but only at the source article. Take for example this entry. When I search for the merge target, I get nothing. It's only when I search the merge source that I'm able to get a result, but there isn't a way to know the merge source.
    If I don't know when or if the histmerge took place, and I don't know what article the history was merged from, I'd have to look through the entirety of the merge log manually to figure that out—and that's suboptimal. — Red-tailed hawk (nest) 17:05, 20 November 2024 (UTC)
    ... Page moves do the same thing, only log the move source. Yet this is not seen as an issue? :)
    But ignoring that, why is it valuable to know this information? What do you gain? And is what you gain actually valuable to your end objective? For example, let's take your There have been several times I can think of when I've been going diff hunting or browsing page history and where some explicit note of a histmerge having occurred would have been useful. Are the revisions left behind in the page history by both the person requesting and the person performing the histmerge not enough (see {{histmerge}})? There are history merges done that don't have that request format such as the WikiProject history merge format, but those are almost always ancient revisions, so what are you gaining there? And where they are not ancient revisions, they are trivial kinds of the form "draft x -> page y, I hate that I even had to interact with this history merge it was so trivial (but also these are great because I don't have to spend significant time on them)". Izno (talk) 17:32, 20 November 2024 (UTC)

    ... Page moves do the same thing, only log the move source. Yet this is not seen as an issue? :)

    I don't think everyone would necessarily agree (see Toadspike's comment below). Chaotic Enby (talk · contribs) 17:42, 20 November 2024 (UTC)
    Page moves do leave a null edit on the page that describes where the page was moved from and was moved to. And it's easy to work backwards from there to figure out the page move history. The same cannot be said of the Special:MergeHistory tool, which doesn't make it easy to re-construct what the heck went on unless we start diving naïvely through the logs. — Red-tailed hawk (nest) 17:50, 20 November 2024 (UTC)
    It can be *possible* to find the original history merge source page without looking through the merge log, but the method for doing so is very brittle and extremely hacky. Basically, look for redirects to the page using "What links here", and find the redirect whose first edit has an unusual byte difference. This relies on the redirect being stable and not deleted or retargeted. There is also another way that relies on byte difference bugs as described in the above-linked discussion by wbm1058. Both of those are ... particularly awful. Graham87 (talk) 03:48, 21 November 2024 (UTC)
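For what it's worth, the size-based part of that heuristic reduces to a one-liner. This is only a sketch of one crude way to operationalise "unusual byte difference" (the 120-byte cutoff for a normal redirect creation is an invented threshold, and the sizes would come from something like prop=revisions with rvdir=newer and rvprop=size):

```python
def history_merged_away(rev_sizes: list[int], redirect_max: int = 120) -> bool:
    # rev_sizes: byte sizes of a redirect's surviving revisions, oldest first.
    # A redirect created normally starts life tiny; if the earliest surviving
    # revision is already large, older history was probably merged out from
    # under it. The cutoff is a guess, not a MediaWiki rule.
    return bool(rev_sizes) and rev_sizes[0] > redirect_max

print(history_merged_away([5800, 5833]))  # -> True
```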
    In the given example, the history-merge occurred here. Your "log" is the edit summaries. "Created page with '..." is the edit summary left by a normal page creation. But wait, there is page history before the edit that created the page. How did it get there? Hmm, the previous edit summary "Declining submission: v - Submission is improperly sourced (AFCH)" tips you off to look for the same title in draft: namespace. Voila! Anyone looking for help with understanding a particular merge may ask me and I'll probably be able to figure it out for you. – wbm1058 (talk) 05:51, 21 November 2024 (UTC)
    Here's another example, of a merge within mainspace. The automatic edit summary (created by the MediaWiki software) of this (No difference) diff "Removed redirect to Jordan B. Acker" points you to the page that was merged at that point. Voila. Voila. Voila. – wbm1058 (talk) 13:44, 21 November 2024 (UTC)
    There are times where those traces aren't left. Aaron Liu (talk) 13:51, 21 November 2024 (UTC)
    Here's another scenario, this one from WP:WikiProject History Merge. The page history shows an edit adding +5,800 bytes, leaving the page with 5,800 bytes. But the previous edit did not leave a blank page. Some say this is a bug, but it's also a feature. That "bug" is actually your "log" reporting that a hist-merge occurred at that edit. Voila, the log for that page shows a temp delete & undelete setting the page up for a merge. The first item on the log:
    @ 20:14, 16 January 2021 Tbhotch moved page Flag of Yucatán to Flag of the Republic of Yucatán (Correct name)
    clues you in to where to look for the source of the merge. Voila, that single edit which removed −5,633 bytes tells you that previous history was merged off of that page. The log provides the details. – wbm1058 (talk) 16:03, 21 November 2024 (UTC)
    (phab:T76557: Special:MergeHistory causes incorrect byte change values in history, authored Dec 2 2014) — Preceding unsigned comment added by Wbm1058 (talkcontribs) 18:13, 21 November 2024 (UTC)
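Since the T76557 artifact is mechanical, scanning a history for it can be sketched in a few lines (names invented; the revision sizes and the byte changes the history page displays are assumed to be available, oldest first):

```python
def merge_splice_points(sizes: list[int], shown_deltas: list[int]) -> list[int]:
    # Flag revisions where the displayed byte change equals the revision's
    # entire size even though the preceding revision was not empty -- the
    # phab:T76557 quirk that WikiProject History Merge reads as an ad hoc
    # log of where history was spliced in.
    return [i for i in range(1, len(sizes))
            if shown_deltas[i] == sizes[i] and sizes[i - 1] != 0]

# The +5,800-byte example above: delta equals full size, previous page not blank.
print(merge_splice_points([4000, 5800, 5900], [4000, 5800, 100]))  # -> [1]
```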
    Again, there are times where the clues are much harder to find, and even in those cases, it'd be much better to have a unified and assured way of finding the source. Aaron Liu (talk) 16:11, 21 November 2024 (UTC)
    Indeed. This is a prime example of an unintended undocumented feature. Graham87 (talk) 08:50, 22 November 2024 (UTC)
    Yeah. I don't think that we can permanently rely on that, given that future versions of MediaWiki are not bound in any real way to support that workaround. — Red-tailed hawk (nest) 04:24, 3 December 2024 (UTC)
  • Support 1b (log only), oppose 1a (null edit). I defer to the experienced histmergers on this, and if they say that adding null edits everywhere would be inconvenient, I believe them. However, I haven't seen any arguments against logging the histmerge at both articles, so I'll support it as a sensible idea. (On a similar note, it bothers me that page moves are only logged at one title, not both.) Toadspike 17:10, 20 November 2024 (UTC)
  • Option 2. The merges are already logged, so there’s no reason to add it to page histories. While it may be useful for habitual editors, it will just confuse readers who are looking for an old revision and occasional editors. Ships & Space(Edits) 18:33, 20 November 2024 (UTC)
    But only the source page is logged as the "target". IIRC it currently can be a bit hard to find out when and who merged history into a page if you don't know the source page and the mergeperson didn't leave any editing indication that they merged something. Aaron Liu (talk) 18:40, 20 November 2024 (UTC)
  • 1B. The present situation of the action being only logged at one page is confusing and unhelpful. But so would be injecting null-edits all over the place.  — SMcCandlish ¢ 😼  01:38, 21 November 2024 (UTC)
  • Option 2. This exercise is dependent on finding a volunteer MediaWiki developer willing to work on this. Good luck with that. Maybe you'll find one a decade from now. – wbm1058 (talk) 05:51, 21 November 2024 (UTC)
    And, more importantly, someone in the MediaWiki group to review it. I suspect there are many people, possibly including myself, who would code this if they didn't think they were wasting their time shuffling things from one queue to another. * Pppery * it has begun... 06:03, 21 November 2024 (UTC)
    That link requires a Gerrit login/developer account to view. It was a struggle to get in to mine (I only have one because of an old Toolforge account and I'd basically forgotten about it), but for those who don't want to go through all that, that group has only 82 members (several of whose usernames I recognise) and I imagine they have a lot on their collective plate. There's more information about these groups at Gerrit/Privilege policy on MediaWiki. Graham87 (talk) 15:38, 21 November 2024 (UTC)
    Sorry, I totally forgot Gerrit behaved in that counterintuitive way and hid public information from logged out users for no reason. The things you miss if Gerrit interactions become something you do pretty much every day. If you want to count the members of the group you also have to follow the chain of included groups - it also includes https://ldap.toolforge.org/group/wmf, https://ldap.toolforge.org/group/ops and the WMDE-MediaWiki group (another login-only link), as well as a few other permission edge cases (almost all of which are redundant because the user is already in the MediaWiki group) * Pppery * it has begun... 18:07, 21 November 2024 (UTC)
  • Support 1a/b, and I would encourage the closer to disregard any opposition based solely on the chances of someone ever actually implementing it. —Compassionate727  12:52, 21 November 2024 (UTC)
    Fine. This stupid RfC isn't even asking the right questions. Why did I need to delete (an expensive operation) and then restore a page in order to "set up for a history merge"? Should we fix the software so that it doesn't require me to do that? Why did the page-mover resort to cut-paste because there was page history blocking their move, rather than ask an administrator for help? Why doesn't the software just let them move over that junk page history themselves, which would negate the need for a later hist-merge? (Actually in this case the offending user has made only 46 edits, so they don't have page-mover privileges. But they were able to move a page. They just couldn't move it back a day later after they changed their mind.) wbm1058 (talk) 13:44, 21 November 2024 (UTC)
    Yeah, revision move would be amazing, for a start. Graham87 (talk) 15:38, 21 November 2024 (UTC)
  • Option 1b – changes to a page's history should be listed in that page's log. There's no need to make a null edit; pagemove null edits are useful because they meaningfully fit into the page's revision history, which isn't the case here. jlwoodwa (talk) 00:55, 22 November 2024 (UTC)
  • Option 1b sounds best since that's what those in the know seem to agree on, but 1a would probably be OK. Abzeronow (talk) 03:44, 23 November 2024 (UTC)
  • Option 1b seems like the one with the best transparency to me. Thanks. Huggums 06:59, 25 November 2024 (UTC)

=== Discussion: Log the use of the HistMerge tool ===

== CheckUser for all new users ==

All new users (IPs and accounts) should be subject to CheckUser against known socks. This would prevent recidivist socks from returning and save the time and energy of users who have to prove a likely case at SPI. Recidivist socks often get better at covering their "tells" each time making detection increasingly difficult. Users should not have to make the huge effort of establishing an SPI when editing from an IP or creating a new account is so easy. We should not have to endure Misplaced Pages:Long-term abuse/HarveyCarter, Misplaced Pages:Sockpuppet investigations/Phạm Văn Rạng/Archive or Misplaced Pages:Sockpuppet investigations/Orchomen/Archive if CheckUser can prevent them. Mztourist (talk) 04:06, 22 November 2024 (UTC)

I'm pretty sure that even if we had enough checkuser capacity to routinely run checks on every new user that doing so would be contrary to global policy. Thryduulf (talk) 04:14, 22 November 2024 (UTC)
Setting aside privacy issues, the fact that the WMF wouldn't let us do it, and a few other things: Checking a single account, without any idea of who you're comparing them to, is not very effective, and the worst LTAs are the ones it would be least effective against. This has been floated several times in the much narrower context of adminship candidates, and rejected each time. It probably belongs on WP:PEREN by now. -- Tamzin (they|xe) 04:21, 22 November 2024 (UTC)
Why can't it be automated? What are the privacy issues and what would WMF concerns be? There has to be a better system than SPI which imposes a huge burden on the filer (and often fails to catch socks) while we just leave the door open for LTAs. Mztourist (talk) 04:39, 22 November 2024 (UTC)
How would it be automated? We can't just block everyone who even sometimes shares an IP with someone, which is most editors once you factor in mobile editing and institutional WiFi. Even if we had a system that told checkusers about all shared-IP situations and asked them to investigate, what are they investigating for? The vast majority of IP overlaps will be entirely innocent, often people who don't even know each other. There's no way for a checkuser to find any signal in all that noise. So the only way a system like this would work is if checkusers manually identified IP ranges that are being used by LTAs, and then placed blocks on those ranges to restrict them from account creation... Which is what already happens. -- Tamzin (they|xe) 04:58, 22 November 2024 (UTC)
I would assume that IT experts can work out a way to automate CheckUser. If someone edits on a shared IP used by a previous sock that should be flagged and human CheckUsers notified so they can look at the edits and the previous sock edits and warn or block as necessary. Mztourist (talk) 05:46, 22 November 2024 (UTC)
We already have autoblock. For cases it doesn't catch, there's an additional manual layer of blocking, where if a sock is caught on an IP that's been used before but wasn't caught by autoblock, a checkuser will block the IP if it's technically feasible, sometimes for months or years at a time. Beyond that, I don't think you can imagine just how often "someone edits on a shared IP used by a previous sock". I'm doing that right now, probably, because I'm editing through T-Mobile. Basically anyone who's ever edited in India or Nigeria has been on an IP used by a previous sock. Basically anyone who's used a large institution's WiFi. There is not any way to weed through all that noise with automation. -- Tamzin (they|xe) 05:54, 22 November 2024 (UTC)
Addendum: An actually potentially workable innovation would be something like a system that notifies CUs if an IP is autoblocked more than once in a certain time period. That would be a software proposal for Phabricator, though, not an enwiki policy proposal, and would still have privacy implications that would need to be squared with the WMF. -- Tamzin (they|xe) 05:57, 22 November 2024 (UTC)
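That addendum amounts to a sliding-window counter. A rough sketch of the idea only: the class name, default threshold, and window are invented, and a real implementation would live inside MediaWiki rather than in a bot:

```python
from collections import defaultdict, deque

class AutoblockWatcher:
    """Flag an IP once it collects `threshold` autoblocks within `window` seconds."""

    def __init__(self, threshold: int = 2, window: float = 86400.0):
        self.threshold, self.window = threshold, window
        self.hits: dict[str, deque] = defaultdict(deque)

    def record(self, ip: str, ts: float) -> bool:
        # Record one autoblock event; drop events older than the window,
        # then report whether this IP is now worth a checkuser's attention.
        q = self.hits[ip]
        q.append(ts)
        while q and ts - q[0] > self.window:
            q.popleft()
        return len(q) >= self.threshold
```

A notification hook would fire whenever `record` returns True.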
I believe Tamzin has it about right, but I want to clarify a thing. If you're hypothetically using T-Mobile (and this also applies to many other ISPs and many LTAs) then the odds are very high that you're using an IP address which has never been used before. With T-Mobile, which is not unusually large by any means, you belong to at least one /32 range which contains a number of IP addresses so big that it has 29 digits. These ranges contain a huge number of users. At the other extreme you have some countries with only a handful of IPs, which everyone uses. These IPs also typically contain a huge number of users. TL;DR: if someone is using a single IP on their own then we'll probably just block it, otherwise you're talking about matching a huge number of users. -- zzuuzz 03:20, 23 November 2024 (UTC)
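For a sense of scale, Python's stdlib ipaddress module can count the addresses in an IPv6 /32 allocation; the prefix below is the reserved documentation prefix, standing in for any ISP-sized /32:

```python
import ipaddress

# A /32 IPv6 allocation leaves 128 - 32 = 96 free bits of address space.
net = ipaddress.ip_network("2001:db8::/32")  # documentation prefix, example only
print(net.num_addresses)  # 2**96 = 79228162514264337593543950336
```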
As I understand it, if you're hypothetically using T-Mobile, then you're not editing, because someone range-blocked the whole network in pursuit of a vandal(s). See Misplaced Pages:Advice to T-Mobile IPv6 users. WhatamIdoing (talk) 03:36, 23 November 2024 (UTC)
T-Mobile USA is a perennial favourite of many of the most despicable LTAs, but that's beside the point. New users with an account can actually edit from T-Mobile. They can also edit from Jio, or Deutsche Telekom, Vodafone, or many other huge networks. -- zzuuzz 03:50, 23 November 2024 (UTC)
Would violate the policy WP:NOTFISHING. –Novem Linguae (talk) 04:43, 22 November 2024 (UTC)
It would apply to every new user as a protective measure against sockpuppetry, like a credit check before you get a card/overdraft. WP:NOTFISHING is archaic, like the whole burdensome SPI system that forces honest users to do all the hard work of proving sockpuppetry while socks and vandals just keep being welcomed in under WP:AGF. Mztourist (talk) 05:46, 22 November 2024 (UTC)
What you're suggesting is to just inundate checkusers with thousands of cases. The suggestion (as I understand it) removes burden from SPI filers by adding a disproportional burden on checkusers, who are already an overworked group. If you're suggesting an automated solution, then I believe IP blocks/IP range blocks and autoblock (discussed by Tamzin, above) already cover enough. It's quite hard to weigh up what you're really suggesting because it feels very vague without much detail - it sounds like you're just saying "a new SPI should be opened for every new user and IP, forever" which is not really a workable solution (for instance, 50 accounts were made in the last 15 minutes, which is about one every 18 seconds) BugGhost🦗👻 18:12, 22 November 2024 (UTC)
And most of those accounts will make zero, one, or two edits, and then never be used again. Even if we liked this idea, doing it for every single account creation would be a waste of resources. WhatamIdoing (talk) 23:43, 22 November 2024 (UTC)
No, they should not. voorts (talk/contributions) 17:23, 22 November 2024 (UTC)
This, very bluntly, flies in the face of WMF policy with regards to use/protection of PII, and as noted by Tamzin this would result in frankly obscene amounts of collateral damage. You have absolutely no idea how frequently IP addresses get passed around (especially in the developing world or on T Mobile), such that it could feasibly have three different, unrelated, people on it over the course of a day or so. —Jéské Couriano v^_^v 18:59, 22 November 2024 (UTC)
 Just out of curiosity: If a certain case of IPs spamming at Help Desk is any indication, would a CU be able to stop that in its tracks? 2601AC47 (talk|contribs) Isn't a IP anon 14:29, 23 November 2024 (UTC)
CUs use their tools to identify socks when technical proof is necessary. The problem you're linking to is caused by one particular LTA account who is extremely obvious and doesn't really require technical proof to identify - checkusers would just be able to provide evidence for something that is already easy to spot. There's an essay on the distinction over at WP:DUCK BugGhost🦗👻 14:45, 23 November 2024 (UTC)
@2601AC47: No, and that is because the user in question's MO is to abuse VPNs. Checkuser is worthless in this case because of that (but the IPs can and should be blocked for 1yr as VPNs). —Jéské Couriano v^_^v 19:35, 26 November 2024 (UTC)
LTA MAB is using a peer-to-peer VPN service which is similar to TOR. Blocking peer-to-peer VPN service endpoint IP addresses carries a higher risk of collateral damage because those aren't assigned to the VPN provider but rather a third party ISP who is likely to dynamically reassign the blocked address to a completely innocent party. 216.126.35.235 (talk) 00:22, 27 November 2024 (UTC)
I slightly oppose this idea. This is not Reddit, where socks are immediately banned or shadowbanned outright. Reddit doesn't have WP:DUCK the way wikis do. Ahri Boy (talk) 00:14, 25 November 2024 (UTC)
How do you know this is how Reddit deals with ban and suspension evasion? They use advanced techniques such as device and IP fingerprinting to ban and suspend users in under an hour. 2600:1700:69F1:1410:5D40:53D:B27E:D147 (talk) 23:47, 28 November 2024 (UTC)
I can see where this is coming from, but we must realise that checkuser is not magic pixie dust nor is it meant for fishing. - Ratnahastin (talk) 04:49, 27 November 2024 (UTC)
The question I ask myself is why must we realize that it is not meant for fishing? To catch fish, you need to fish. The no-fishing rule is not fit for purpose, nor is it a rule that other organizations that actively search for ban evasion use. Machines can do the fishing. They only need to show us the fish they caught. Sean.hoyland (talk) 05:24, 27 November 2024 (UTC)
I think for the same reason we don't want governments to be reading our mail and emails. If we checkuser everybody, then nobody has any privacy. Donald Albury 20:20, 27 November 2024 (UTC)

I sympathize with Mztourist. The current system is less effective than it needs to be. Ban evading actors make a lot of edits, they are dedicated hard-working folk in contentious topic areas. They can make up nearly 10% of new extendedconfirmed actors some years and the quicker an actor becomes EC the more likely they are to be blocked later for ban evasion. Their presence splits the community into two classes, the sanctionable and the unsanctionable with completely different payoff matrices. This has many consequences in contentious topic areas and significantly impacts the dynamics. The current rules are probably not good rules. Other systems have things like a 'commitment to authenticity' and actively search for ban evasion. It's tempting to burn it all down and start again, but with what? Having said that, the SPI folks do a great job. The average time from being granted extendedconfirmed to being blocked for ban evasion seems to be going down. Sean.hoyland (talk) 18:28, 22 November 2024 (UTC)

I confess that I am doubtful about that 10% claim. WhatamIdoing (talk) 23:43, 22 November 2024 (UTC)
WhatamIdoing, me too. I'm doubtful about everything I say because I've noticed that the chance it is slightly to hugely wrong is quite high. The EC numbers are work in progress, but I got distracted. The description "nearly 10% of new extendedconfirmed actors" is a bit misleading, because 'new' doesn't really mean new actors. It means actors that acquired EC for a given year, so newly acquired privileges. They might have registered in previous years. Also, I don't have 100% confidence in the way I count EC grants because there are some edge cases, and I'm ignoring sysops. But anyway, the statement was based on this data of questionable precision. And the statement about a potential relationship between speed of EC acquisition and probability of being blocked is based on this data of questionable precision. And of course, currently undetected socks are not included, and there will be many. Sean.hoyland (talk) 03:39, 23 November 2024 (UTC)
I'm not interested in clicking through to a Google file. Here's my back-of-the-envelope calculation: We have something like 120K accounts that would qualify for EXTCONF. Most of these are no longer active, and many stopped editing so long ago that they don't actually have the user right.
Misplaced Pages is almost 24 years old. That makes convenient math: On average, since inception, 5K editors have achieved EXTCONF levels each year.
If the 10% estimate is true, then 500 accounts per year – about 10 per week – are being created by banned editors and going undetected long enough for the accounts to make 500 edits and to work in CTOP areas. Do we even have enough WP:BANNED editors to make it plausible to expect banned editors to bring 500 accounts a year up to EXTCONF levels (plus however many accounts get started but are detected before then)? WhatamIdoing (talk) 03:53, 23 November 2024 (UTC)
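The back-of-the-envelope figures in the comment above can be checked in a few lines (the 120K account count, 24-year project age, and 10% rate are the discussion's assumptions, not measured values):

```python
# Back-of-the-envelope check of the figures discussed above.
# Assumptions from the thread: ~120,000 accounts at EXTCONF level,
# ~24 years of project history, and a hypothetical 10% sock rate.
total_extconf_accounts = 120_000
project_age_years = 24

extconf_per_year = total_extconf_accounts / project_age_years  # 5,000 per year
sock_rate = 0.10
sock_accounts_per_year = extconf_per_year * sock_rate          # 500 per year
sock_accounts_per_week = sock_accounts_per_year / 52           # ~9.6 per week

print(extconf_per_year, sock_accounts_per_year, round(sock_accounts_per_week, 1))
```

The numbers match the "about 10 per week" figure in the comment; the real question raised is whether the 10% input assumption holds.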
Suit yourself. I'm not interested in what interests other people or back of the envelope calculations. I'm interested in understanding the state of a system over time using evidence-based approaches by extracting data from the system itself. Let the data speak for itself. It has a lot to tell us. Then it is possible to test hypotheses and make evidence-based decisions. Sean.hoyland (talk) 04:13, 23 November 2024 (UTC)
@WhatamIdoing, there's a sockmaster in the IPA CTOP who has made more than 100 socks. 500 new XC socks every year doesn't seem that much of a stretch in comparison. -- asilvering (talk) 19:12, 23 November 2024 (UTC)
More than 100 XC socks? Or more than 100 detected socks, including socks with zero edits?
Making a lot of accounts isn't super unusual, but it's a lot of work to get 100 accounts up to 500+ edits. Making 50,000 edits is a lot, even if it's your full-time job. WhatamIdoing (talk) 01:59, 24 November 2024 (UTC)
Lots of users get it done in a couple of days, often through vandal fighting tools. It really is not that many when the edits are mostly mindless. nableezy - 00:18, 26 November 2024 (UTC)
But that's kind of my point: "A couple of days", times 100 accounts, means 200–300 days per year. If you work five days per week and 52 weeks per year, that's 260 work days. This might be possible, but it's a full-time job.
Since the 30-day limit is something that can't be achieved through effort, I wonder if a sudden change to, say, 6 months would produce a five-month reprieve. WhatamIdoing (talk) 02:23, 26 November 2024 (UTC)
Who says it’s only one at a time? Icewhiz for example has had 4 plus accounts active at a time. nableezy - 02:25, 26 November 2024 (UTC)
There is some data about ban evasion timelines for some sockmasters in PIA that show how accounts are operated in parallel. Operating multiple accounts concurrently seems to be the norm. Sean.hoyland (talk) 04:31, 26 November 2024 (UTC)
Imagine that it takes an average of one minute to make a (convincing) edit. That means that 500 edits = 8.33 hours, i.e., more than one full work day.
Imagine, too, that having reached this point, you actually need to spend some time using your newly EXTCONF account. This, too, takes time.
If you operate several accounts at once, that means:
You spend an hour editing from Account1. You spend the next hour editing from Account2. You spend another hour editing from Account3. You spend your fourth hour editing from Account4. Then you take a break for lunch, and come back to edit from Accounts 5 through 8.
At the end of the day, you have brought 8 accounts up to 60 edits (12% of the minimum goal). And maybe one of them got blocked, too, which is lost effort. At this rate, it would take you an entire year of full-time work to get 100 EXTCONF accounts, even though you are operating multiple accounts concurrently. Doing 50 edits per day in 10 accounts is not faster than doing 500 edits in 1 account. It's the same amount of work. WhatamIdoing (talk) 05:13, 29 November 2024 (UTC)
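For anyone checking the arithmetic in this exchange, a quick sketch (one minute per edit and the 500-edit extended-confirmed threshold are the assumptions stated above; the 8-account schedule is the hypothetical from the comment):

```python
# Reproducing the edit-count arithmetic from the comment above.
minutes_per_edit = 1
edits_for_extconf = 500

hours_per_account = edits_for_extconf * minutes_per_edit / 60  # ~8.33 hours per account
accounts_per_day = 8            # one hour of editing per account, per the scenario
edits_per_account_per_day = 60
edits_per_day_total = accounts_per_day * edits_per_account_per_day  # 480 edits/day
progress = edits_per_account_per_day / edits_for_extconf            # 0.12 -> 12% of goal

total_minutes_for_100 = 100 * edits_for_extconf * minutes_per_edit  # 50,000 minutes
print(round(hours_per_account, 2), edits_per_day_total, progress)
```

Whether "a minute per edit" is realistic for mindless vandal-fighting edits is disputed in the replies below, but the proportions scale linearly either way.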
Sure it’s an effort, though it doesn’t take a minute an edit. But I’m not sure why I need to imagine something that has happened multiple times already. Icewhiz most recently had like 4-5 EC accounts active, and there are probably several more. Yes, there is an effort there. But also yes, it keeps happening. nableezy - 15:00, 29 November 2024 (UTC)
My point is that "4-5 EC accounts" is not "100". WhatamIdoing (talk) 19:31, 30 November 2024 (UTC)
It’s 4-5 at a time for a single sock master. Check the Icewhiz SPI for how many that adds up to over time. nableezy - 20:16, 30 November 2024 (UTC)
Many of our frequent fliers are already adept at warehousing accounts for months or even years, so a bump in the time period probably won't make much of a difference. Additionally, and without going into detail publicly, there are several methods whereby semi- or even fully-automated editing can be used to get to 500 edits with a minimum of effort, or at least well within script-kid territory. Because so many of those are obvious on inspection some will assume that all of them are, but there are a number of rather subtle cases that have come up over the years and it would be foolish to assume that it isn't ongoing. 184.152.68.190 (talk) 17:31, 28 November 2024 (UTC)

Also, if we divide the space into contentious vs not-contentious, maybe a one size fits all CU policy doesn't make sense. Sean.hoyland (talk) 18:55, 22 November 2024 (UTC)

Terrible idea. Let's AGF that most new users are here to improve Misplaced Pages instead of damage it. Some1 (talk) 18:33, 22 November 2024 (UTC)

Ban evading actors who employ deception via sockpuppetry in the WP:PIA topic area are here to improve Misplaced Pages, from their perspective, rather than damage it. There is no need to use faith. There are statistics. There is a probability that a 'new user' is employing ban evasion. Sean.hoyland (talk) 18:46, 22 November 2024 (UTC)
My initial comment wasn't a direct response to yours, but new users and IPs won't be able to edit in the WP:PIA topic area anyway since they need to be extended confirmed. Some1 (talk) 20:08, 22 November 2024 (UTC)
Let's not hold up the way PIA handles new users and IPs, in which they are allowed to post to talk pages but then have their talk page post removed if it doesn't fall within very specific parameters, as some sort of model. CMD (talk) 02:51, 23 November 2024 (UTC)

Strongly support automatically checkusering all active users (new and existing) at regular intervals. If it were automated -- e.g., a script runs that compares IPs, user agent, other typical subscriber info -- there would be no privacy violation, because that information doesn't have to be disclosed to any human beings. Only the "hits" can be forwarded to the CU team for follow-up. I'd run that script daily. If the policy forbids it, we should change the policy to allow it. It's mind-boggling that Misplaced Pages doesn't do this already. It's a basic security precaution. (Also, email-required registration and get rid of IP editing.) Levivich (talk) 02:39, 23 November 2024 (UTC)

I don't think you've been reading the comments from people who know what they are talking about. There would be hundreds, at least, of hits per day that would require human checking. The policy that prohibits this sort of massive breach of privacy is the Foundation's and so not one that en.wp could change even if it were a good idea (which it isn't). Thryduulf (talk) 03:10, 23 November 2024 (UTC)
A computer can be programmed to check for similarities or patterns in subscriber info (IP, etc), and in editing activity (time cards, etc), and content of edits and talk page posts (like the existing language similarity tool), with various degrees of certainty in the same way the Cluebot does with ORES when it's reverting vandalism. And the threshold can be set so it only forwards matches of a certain certainty to human CUs for review, so as not to overwhelm the humans. The WMF can make this happen with just $1 million of its $180 million per year (and it wouldn't be violating its own policies if it did so). Enwiki could ask for it, other projects might join too. Levivich (talk) 05:24, 23 November 2024 (UTC)
"Oh now I see what you mean, Levivich, good point, I guess you know what you're talking about, after all."
"Thanks, Thryduulf!" Levivich (talk) 17:42, 23 November 2024 (UTC)
I seem to have missed this comment, sorry. However I am very sceptical that sockpuppet detection is meaningfully automatable. From what CUs say it is as much art as science (which is why SPI cases can result in determinations like "possilikely"). This is the sort of thing that is difficult (at best) to automate. Additionally, the only way to reliably develop such automation would be for humans to analyse and process a massive amount of data from accounts that both are and are not sockpuppets and classify the results as one or the other, and that analysis would be a massive privacy violation on its own. Assuming you have developed this magic computer that can assign a likelihood of any editor being a sock of someone who has edited in the last three months (data older than that is deleted) on a percentage scale, you then have to decide what level is appropriate to send to humans to check. Say for the sake of argument it is 75%; that means roughly one in four people being accused are innocent and are having their privacy impinged unnecessarily - and how many CUs are needed to deal with this caseload? Do we have enough? SPI isn't exactly backlog-free and there aren't hordes of people volunteering for the role (although unbreaking RFA might help with this in the medium to long term). The more you reduce the number sent to CUs to investigate, the less benefit there is over the status quo.
In addition to all the above, how similar is "similar" in terms of articles edited, writing style, timecard, etc? How are you avoiding legitimate sockpuppets? Thryduulf (talk) 18:44, 23 November 2024 (UTC)
You know this already but for anyone reading this who doesn't: when a CU "checks" somebody, it's not like they send a signal out to that person's computer to go sniffing around. In fact, all the subscriber info (IP address, etc.) is already logged on the WMF's server logs (as with any website). A CU "check" just means a volunteer CU gets to look at a portion of those logs (to look up a particular account's subscriber info). That's the privacy concern: we have rules, rightfully so, about when volunteer CUs (not WMF staff) can read the server logs (or portions of them). Those rules do not apply to WMF staff, like devs and maintenance personnel, nor do they apply to the WMF's own software reading its own logs. Privacy is only an issue when those logs are revealed to volunteer CUs.
So... feeding the logs into software in order to train the software doesn't violate anyone's policy. It's just letting a computer read its own files. Human verification of the training outcomes also doesn't have to violate anyone's privacy -- just don't use volunteer CUs to do it, use WMF staff. Or, anonymize the training data (changing usernames to "Example1", "Example2", etc.). Or use historical data -- which would certainly be part of the training, since the most effective way would be to put known socks into the training data to see if the computer catches them.
Anyway, training the system won't violate anyone's privacy.
As for the hit rate -- 75% would be way, way too low. We'd be looking for definitely over 90% or 95%, and probably more like 99.something percent. Cluebot doesn't get vandalism wrong 1 out of 4 times, neither should CluebotCU. Heck, if CluebotCU can't do better than 75%, it's not worth doing. A more interesting question is whether the 99.something% hit rate would be helpful to CUs, or whether that would only catch the socks that are so obvious you don't even need CU to recognize them. Only testing in the field would tell.
But overall, AI looking for patterns, and checking subscriber info, edit patterns, and the content of edits, would be very helpful in tamping down on socking, because the computer can make far more checks than a human (a computer can look at 1,000 accounts and a 100,000 edits no problem, which no human can do), it'll be less biased than humans, and it can do it all without violating anyone's privacy -- in fact, lowering the privacy violations by lowering the false positives, sending only high-probability (90%+, not 75%+) to humans for review. And it can all be done with existing technology, and the WMF has the money to do it. Levivich (talk) 19:38, 23 November 2024 (UTC)
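As a rough illustration of the triage idea being debated in this thread — score candidate account pairs automatically and forward only high-confidence hits to human CUs — here is a minimal sketch; the account names, scores, and 0.95 cutoff are all invented for the example, not taken from any real system:

```python
# Hypothetical illustration of threshold-based triage: an automated
# scorer assigns each candidate account pair a sock-likelihood, and
# only pairs at or above the cutoff are forwarded for human CU review.
candidate_pairs = [
    ("AccountA", "AccountB", 0.97),
    ("AccountC", "AccountD", 0.81),
    ("AccountE", "AccountF", 0.99),
    ("AccountG", "AccountH", 0.40),
]

def triage(pairs, threshold=0.95):
    """Return only the pairs whose score meets the review threshold."""
    return [(a, b, s) for a, b, s in pairs if s >= threshold]

flagged = triage(candidate_pairs)
print(flagged)  # only the 0.97 and 0.99 pairs survive a 0.95 cutoff
```

The whole dispute above reduces to where that threshold sits: lower it and humans drown in false positives; raise it and the system only confirms socks that were already obvious.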
The more you write the clearer you make it that you don't understand checkuser or the WMF's policies regarding privacy. It's also clear that I'm not going to convince you that this is unworkable so I'll stop trying. Thryduulf (talk) 20:42, 23 November 2024 (UTC)
Yeah it's weird how repeatedly insulting me hasn't convinced me yet. Levivich (talk) 20:57, 23 November 2024 (UTC)
If you are are unable to distinguish between reasoned disagreement and insults, then it's not at all weird that reasoned disagreement fails to convince you. Thryduulf (talk) 22:44, 23 November 2024 (UTC)
@Levivich: Whatever existing data set we have has too many biases to be useful for this, and this is going to be prone to false positives. AI needs lots of data to be meaningfully trained. Also, AI here would be learning a function; when the output is not in fact a function of the input, there's nothing for an AI model to target, and this is very much the case here. On Wikidata, where I am a CheckUser, almost all edit summaries are automated even for human edits (just like clicking the rollback button is, or undoing an edit is by default), and it is very hard to meaningfully tell whether someone is a sock or not without highly case-specific analysis. No AI model is better than the data it's trained on.
Also, about the privacy policy: you are completely incorrect when you say "Those rules do not apply to WMF staff, like devs and maintenance personnel, nor do they apply to the WMF's own software reading its own logs". Staff can only access that information on a need-to-know basis, just like CheckUsers, and data privacy laws like the EU's and California's mean you cannot just do whatever random thing you want with the information you collect from users about them.--Jasper Deng (talk) 21:56, 23 November 2024 (UTC)
So which part of the wmf:Privacy Policy would prohibit the WMF from developing an AI that looks at server logs to find socks? Do you want me to quote to you the portions that explicitly disclose that the WMF uses personal information to develop tools and improve security? Levivich (talk) 22:02, 23 November 2024 (UTC)
I mean yeah that would probably be more productive than snarky bickering BugGhost🦗👻 22:05, 23 November 2024 (UTC)
@Levivich: Did you read the part where I mentioned privacy laws? Also, in this industry no one is allowed unfettered usage of private data even internally; there are internal policies that govern this that are broadly similar to the privacy policy. It's one thing to test a proposed tool on an IP address like Special:Contribs/2001:db8::/32, but it's another to train an AI model on it. Arguably an equally big privacy concern is the usage of new data from new users after the model is trained and brought online. The foundation is already hiding IP addresses by default even for anonymous users soon, and they will not undermine that mission through a tool like this. Ultimately, the Board of Trustees has to assume legal responsibility and liability for such a thing; put yourself in their position and think of whether they'd like the liability of something like this.--Jasper Deng (talk) 22:13, 23 November 2024 (UTC)
So can you quote a part of the privacy policy, or a part of privacy laws, or anything, that would prohibit feeding server logs into a "Cluebot-CU" to find socking?
Because I can quote the part of the wmf:Privacy Policy that allows it, and it's a lot:

We may use your public contributions, either aggregated with the public contributions of others or individually, to create new features or data-related products for you or to learn more about how the Wikimedia Sites are used ...

Because of how browsers work, we receive some information automatically when you visit the Wikimedia Sites ... This information includes the type of device you are using (possibly including unique device identification numbers, for some beta versions of our mobile applications), the type and version of your browser, your browser's language preference, the type and version of your device's operating system, in some cases the name of your internet service provider or mobile carrier, the website that referred you to the Wikimedia Sites, which pages you request and visit, and the date and time of each request you make to the Wikimedia Sites.

Put simply, we use this information to enhance your experience with Wikimedia Sites. For example, we use this information to administer the sites, provide greater security, and fight vandalism; optimize mobile applications, customize content and set language preferences, test features to see what works, and improve performance; understand how users interact with the Wikimedia Sites, track and study use of various features, gain understanding about the demographics of the different Wikimedia Sites, and analyze trends. ...

We actively collect some types of information with a variety of commonly-used technologies. These generally include tracking pixels, JavaScript, and a variety of "locally stored data" technologies, such as cookies and local storage. ... Depending on which technology we use, locally stored data may include text, Personal Information (like your IP address), and information about your use of the Wikimedia Sites (like your username or the time of your visit). ... We use this information to make your experience with the Wikimedia Sites safer and better, to gain a greater understanding of user preferences and their interaction with the Wikimedia Sites, and to generally improve our services. ...

We and our service providers use your information ... to create new features or data-related products for you or to learn more about how the Wikimedia Sites are used ... To fight spam, identity theft, malware and other kinds of abuse. ... To test features to see what works, understand how users interact with the Wikimedia Sites, track and study use of various features, gain understanding about the demographics of the different Wikimedia Sites and analyze trends. ...

When you visit any Wikimedia Site, we automatically receive the IP address of the device (or your proxy server) you are using to access the Internet, which could be used to infer your geographical location. ... We use this location information to make your experience with the Wikimedia Sites safer and better, to gain a greater understanding of user preferences and their interaction with the Wikimedia Sites, and to generally improve our services. For example, we use this information to provide greater security, optimize mobile applications, and learn how to expand and better support Wikimedia communities. ...

We, or particular users with certain administrative rights as described below, need to use and share your Personal Information if it is reasonably believed to be necessary to enforce or investigate potential violations of our Terms of Use, this Privacy Policy, or any Wikimedia Foundation or user community-based policies. ... We may also disclose your Personal Information if we reasonably believe it necessary to detect, prevent, or otherwise assess and address potential spam, malware, fraud, abuse, unlawful activity, and security or technical concerns. ... To facilitate their work, we give some developers limited access to systems that contain your Personal Information, but only as reasonably necessary for them to develop and contribute to the Wikimedia Sites. ...

Yeah that's a lot. Then there's this whole FAQ that says

It is important for us to be able to make sure everyone plays by the same rules, and sometimes that means we need to investigate and share specific users' information to ensure that they are.

For example, user information may be shared when a CheckUser is investigating abuse on a Project, such as suspected use of malicious "sockpuppets" (duplicate accounts), vandalism, harassment of other users, or disruptive behavior. If a user is found to be violating our Terms of Use or other relevant policy, the user's Personal Information may be released to a service provider, carrier, or other third-party entity, for example, to assist in the targeting of IP blocks or to launch a complaint to the relevant Internet Service Provider.

So using IP addresses, etc., to develop new tools, to test features, to fight violations of the Terms of Use, and disclosing that info to Checkusers... all explicitly permitted by the Privacy Policy. Levivich (talk) 22:22, 23 November 2024 (UTC)
@Levivich: "We, or particular users with certain administrative rights as described below, need to use and share your Personal Information if it is reasonably believed to be necessary to enforce or investigate potential violations of our Terms of Use" – "reasonably believed to be necessary" is not going to hold up in court when it's sweepingly applied to everyone. This doesn't even take into consideration the laws I mentioned, like GDPR. I'm not a lawyer, and I'm guessing neither are you. If you want to be the one assuming the legal liability for this, contact the board today and sign the contract. Even then they would probably not agree to such an arrangement. So you're preaching to the choir: only the foundation could even consider assuming this risk. Also, it's clear that you do not have a single idea of how developing something like this works if you think it can be done for $1 million. Something this complex has to be done right and tech salaries and computing resources are expensive.--Jasper Deng (talk) 22:28, 23 November 2024 (UTC)
What I am suggesting does not involve sharing everyone's data with Checkusers. It's pretty obvious that looking at their own server logs is "necessary to enforce or investigate potential violations of our Terms of Use". Five people is how big the WMF's wmf:Machine Learning team is, @ $200k each, $1m/year covers it. Five people is enough for that team to improve ORES, so another five-person team dedicated to "ORES-CU" seems a reasonable place to start. They could double that, and still have like $180M left over. Levivich (talk) 22:40, 23 November 2024 (UTC)
@Levivich: Yeah no, lol. $200k each is not a very competitive total compensation, considering that that needs to include benefits, health insurance, etc. This doesn't include their manager or the hefty hardware required to run ML workflows. It doesn't include the legal support required given the data privacy law compliance needed. Capriciously looking at the logs does not count; accessing data of users the foundation cannot reasonably have said to be likely to cause abuse is not permissible. This all aside from the bias and other data quality issues at hand here. You can delude yourself all you want, but nature cannot be fooled. I'm finished arguing with you anyways, because this proposal is either way dead on arrival.--Jasper Deng (talk) 23:45, 23 November 2024 (UTC)
@Jasper Deng, haggling over the math here isn't really important. You could quintuple the figures @Levivich gave and the Foundation would still have millions upon millions of dollars left over. -- asilvering (talk) 23:48, 23 November 2024 (UTC)
@Asilvering: The point I'm making is Levivich does not understand the complexity behind this kind of thing and thus his arguments are not to be given weight by the closer. Jasper Deng (talk) 23:56, 23 November 2024 (UTC)
As a statistician/data scientist, @Levivich is correct about the technical side of this—building an ML algorithm to detect sockpuppets would be pretty easy. Duplicate-user algorithms like these are common across many websites. For a basic classification task like this (basically an ML 101 homework problem), I think $1 million is about right. As a bonus, the same tools could be used to identify and correct for possible canvassing or brigading, which behave a lot like sockpuppetry from a statistical perspective. A similar algorithm is already used by Twitter's community notes feature.
IANAL, so I can't comment on the legal side of this, and I can't comment on whether that money would be better-spent elsewhere since I don't know what the WMF budget looks like. Overall though, the technical implementation wouldn't be a major hurdle. – Closed Limelike Curves (talk) 20:44, 24 November 2024 (UTC)
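As a toy illustration of one signal such a classifier might use — the "timecard" similarity mentioned earlier in the thread — here is a pure-Python sketch; the edit-hour histograms are invented for the demo, and a real system would combine many such features:

```python
import math

# Toy version of one signal a duplicate-account classifier could use:
# cosine similarity between two accounts' "timecards" (edits per hour
# of day, 24 buckets). The histograms below are invented for the demo.
def timecard_similarity(hist_a, hist_b):
    """Cosine similarity between two 24-bucket edit-hour histograms."""
    dot = sum(a * b for a, b in zip(hist_a, hist_b))
    norm = math.sqrt(sum(a * a for a in hist_a)) * math.sqrt(sum(b * b for b in hist_b))
    return dot / norm if norm else 0.0

suspected = [0] * 8 + [5, 9, 12, 7] + [0] * 12   # edits clustered 08:00-11:59
master    = [0] * 8 + [4, 10, 11, 6] + [0] * 12  # very similar pattern
unrelated = [3] * 24                             # edits spread evenly

print(round(timecard_similarity(suspected, master), 3))     # close to 1.0
print(round(timecard_similarity(suspected, unrelated), 3))  # noticeably lower
```

On its own a feature like this proves nothing (many unrelated editors share a timezone), which is why the thread's disagreement about false-positive rates matters.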
Third-party services like Sift.com provide this kind of algorithm-based account fraud protection as an alternative to building and maintaining internally. czar 23:41, 24 November 2024 (UTC)
Building such a model is only a small part of a real production system. If this system is to operate on all account creations, it needs to be at least as reliable as the existing systems that handle account creations. As you probably know, data scientists developing such a model need to be supported by software engineers and site reliability engineers supporting the actual system. Then you have the problem of new sockers who are not on the list of sockmasters to check against. Non-English-language speakers often would be put at a disadvantage too. It's not as trivial as you make it out to be, thus I stand by my estimate.--Jasper Deng (talk) 06:59, 25 November 2024 (UTC)
None of you have accounted for Hofstadter's law.
I don't think we need to spend more time speculating about a system that WMF Legal is extremely unlikely to accept. Even if they did, it wouldn't exist until several years from now. Instead, let's try to think of things that we can do ourselves, or with only a very little assistance. Small, lightweight projects with full community control can help us now, and if we prove that ____ works, the WMF might be willing to adopt and expand it later. WhatamIdoing (talk) 23:39, 25 November 2024 (UTC)
That's a mistake -- doing the same thing Misplaced Pages has been doing for 20+ years. The mistake is in leaving it to volunteers to catch sockpuppetry, rather than insisting that the WMF devote significant resources to it. And it's a mistake because the one thing we volunteers can't do, that the WMF can do, is comb through the server logs looking for patterns. Levivich (talk) 23:44, 25 November 2024 (UTC)
Not sure about the "building an ML algorithm to detect sockpuppets would be pretty easy" part, but I admire the optimism. It is certainly the case that it is possible, and people have done it with a surprising level of success a very long time ago in ML terms e.g. https://doi.org/10.1016/j.knosys.2018.03.002. These projects tend to rely on the category graph to distinguish sock and non-sock sets for training, the categorization of accounts as confirmed or suspected socks. However, the category graph is woefully incomplete i.e. there is information in the logs that is not reflected in the graph, so ensuring that all ban evasion accounts are properly categorized as such might help a bit. Sean.hoyland (talk) 03:58, 26 November 2024 (UTC)
Thankfully, we wouldn't have to build an ML algorithm, we can just use one of the existing ones. Some are even open source. Or WMF could use a third party service like the aforementioned sift.com. Levivich (talk) 16:17, 26 November 2024 (UTC)
Let me guess: Essentially, you would like their machine-learning team to use Sift's AI-Powered Fraud Protection, which, from what I can glance, handles everything from safeguarding subscriptions to defending digital content and in-app purchases, and helps businesses reduce friction and stop sophisticated fraud attacks that gut growth, to provide the ability for us to automatically checkuser all active users? 2601AC47 (talk·contribs·my rights) Isn't a IP anon 16:25, 26 November 2024 (UTC)
The WMF already has the ability to "automatically checkuser all users" (the verb "checkuser" just means "look at the server logs"), I'm suggesting they use it. And that they use it in a sophisticated way, employing (existing, open source or commercially available) AI/ML technologies, like the same kind we already use to automatically revert vandalism. Contrary to claims here, doing so would not be illegal or even expensive (comparatively, for the WMF). Levivich (talk) 16:40, 26 November 2024 (UTC)
So, in my attempt to get things set right and steer towards a consensus that is satisfactory, I sincerely follow-up: What lies beyond that in this vast, uncharted sea? And could this mean any more in the next 5 years? 2601AC47 (talk·contribs·my rights) Isn't a IP anon 16:49, 26 November 2024 (UTC)
What lies beyond is mw:Extension:SimilarEditors. Levivich (talk) 17:26, 26 November 2024 (UTC)
So, @2601AC47, I think the answer to your question is "tell the WMF we really, really, really would like more attention to sockpuppetry and IP abuse from the ML team". -- asilvering (talk) 17:31, 26 November 2024 (UTC)
Which I don't suppose someone can at the next board meeting on December 11? 2601AC47 (talk·contribs·my rights) Isn't a IP anon 18:00, 26 November 2024 (UTC)
I may also point to this, where they mention development in other areas, such as social media features and machine learning expertise. 2601AC47 (talk·contribs·my rights) Isn't a IP anon 16:36, 26 November 2024 (UTC)
e.g. m:Research:Sockpuppet_detection_in_Wikimedia_projects Sean.hoyland (talk) 17:02, 26 November 2024 (UTC)
And that mentions Socksfinder, still in beta it seems. 2601AC47 (talk·contribs·my rights) Isn't a IP anon 17:10, 26 November 2024 (UTC)
3 days! When I first posted my comment and some editors responded that I didn't know what I was talking about, it can't be done, it'd violate the privacy policy and privacy laws, WMF Legal would never allow it... I was wondering how long it would take before somebody pointed out that this thing that can't be done has already been done and has been under development for at least 7 years now.
Of course it's already under development, it's pretty obvious that the same Misplaced Pages that developed ClueBot, one of the world's earlier and more successful examples of ML applications, would try to employ ML to fight multiple-account abuse. I mean, I'm obviously not gonna be the first person to think of this "innovation"!
Anyway, it took 3 days. Thanks, Sean! Levivich (talk) 17:31, 26 November 2024 (UTC)
Unlike what is being proposed, SimilarEditors only works based on publicly available data (e.g. similarities in editing patterns), and not IP data. To quote the page Sean linked, in the model's current form, we are only considering public data, but most saliently private data such as IP addresses or user-agent information are features currently used by checkusers that could be later (carefully) incorporated into the models. So, not only does the current model not look at IP data, the research project also acknowledges that actually using such data should only be done in a "careful" way, because of those very same privacy policy issues quoted above. On the ML side, however, this does prove that it's being worked on, and I'm honestly not surprised at all that the WMF is working on machine learning-based tools to detect sockpuppets. Chaotic Enby (talk · contribs) 17:50, 26 November 2024 (UTC)
Right. We should ask WMF to do the later (carefully) incorporated into the models part (especially since it's now later). BTW, the SimilarUsers API already pulls IP and other metadata. SimilarExtensions (a tool that uses the API) doesn't release that information to CheckUsers, by design. And that's a good thing, we can't just release all IPs to CheckUsers, it does indeed have to be done carefully. But user metadata can be used. What I'm suggesting is that the WMF should proceed to develop these types of tools (including the careful use of user metadata). Levivich (talk) 17:57, 26 November 2024 (UTC)
Not really clear that they're pulling IP data from logged-in users. The relevant section reads:

USER_METADATA (203MB): for every user in COEDIT_DATA, this contains basic metadata about them (total number of edits in data, total number of pages edited, user or IP, timestamp range of edits).

This reads like they're collecting the username or IP depending on whether they're a logged-in user or an IP user. Chaotic Enby (talk · contribs) 18:14, 26 November 2024 (UTC)
In a few years people might look back on these days when we only had to deal with simple devious primates employing deception as the halcyon days. Sean.hoyland (talk) 18:33, 26 November 2024 (UTC)
I assumed 1 million USD/year was accounting for Hofstadter's law several times over. Otherwise it feels wildly pessimistic. – Closed Limelike Curves (talk) 15:57, 26 November 2024 (UTC)
IP range 2600:1700:69F1:1410:0:0:0:0/64 blocked by a CU
The following discussion has been closed. Please do not modify it.
Why do you guys hate the WMF so much? If it weren’t for them, you wouldn’t have this website at all. 2600:1700:69F1:1410:5D40:53D:B27E:D147 (talk) 23:51, 28 November 2024 (UTC)
We don’t. 2601AC47 (talk·contribs·my rights) Isn't a IP anon 01:13, 29 November 2024 (UTC)
Then why do you guys always whine and complain about how incompetent they are and how much money they make and are actively against their donation drives? 2600:1700:69F1:1410:6DF5:851F:7413:CA3B (talk) 01:29, 29 November 2024 (UTC)
We don't. Levivich (talk) 02:47, 29 November 2024 (UTC)
Don’t “we don’t” me again. 2600:1700:69F1:1410:C812:78B7:C08A:5AA5 (talk) 03:11, 29 November 2024 (UTC)
This may be surprising, but it turns out there's more than one person on Misplaced Pages, and many of us have different opinions on things. You're probably thinking of @Guy Macon's essay.
I disagree with his argument that the WMF is incompetent, but at the same time, smart thinking happens on the margin. Just because the WMF spent their first $20 million extremely well (on creating Misplaced Pages) doesn't mean giving them $200 million would make them 10× as good. Nobody here thinks the WMF budget should be cut to $0; there's just some of us who think it needs a haircut.
For me it comes down to, "if you don't donate to the WMF, where does that money go instead?" I'd rather you give that money to some other charity—feeding African children is more important than reskinning Misplaced Pages—but if you won't, I doubt giving it to the WMF is worse than whatever else you were going to spend it on. Whether we should cut back on ads depends on whether this money is coming out of donors' charity budgets or their regular budgets. – Closed Limelike Curves (talk) 03:10, 29 November 2024 (UTC)
I already struggle enough with prioritizing charities and whether which ones are ethical or not and how I should be spending every single penny I get on charities dealing with PIA and trans issues because those are the most oppressed groups in the world right now. The WMF is not helping people who are actively getting killed and having their rights taken away therefore they are not important. 2600:1700:69F1:1410:C812:78B7:C08A:5AA5 (talk) 03:15, 29 November 2024 (UTC)
In that case, I'd suggest checking out GiveWell, which has some very good recommendations. That said, this subthread feels wildly off-topic. – Closed Limelike Curves (talk) 03:33, 29 November 2024 (UTC)
So goes this whole discussion; but to give a slightly longer answer to the IP: We’re not telling them to get lost on a different path, we’re trying (despite everything) to establish relations, consensus and mutual trust. And hopefully long-term progress on key areas of contention. We don’t hate them, or else they’ll dismiss us completely. 2601AC47 (talk·contribs·my rights) Isn't a IP anon 03:44, 29 November 2024 (UTC)
Any such system would be subject to numerous biases or be easily defeatable. Such an automated anti-abuse system would have to be exclusively a foundation initiative as only they have the resources for such a monumental undertaking. It would need its own team of developers.--Jasper Deng (talk) 18:57, 23 November 2024 (UTC)

Absolutely no chance that this would pass. WP:SNOW, even though there isn't a flood of opposes. There are two problems:

  1. The existing CheckUser team barely has the bandwidth for the existing SPI load. Doing this on every single new user would be impractical and would enable WP:LTA's by diverting valuable CheckUser bandwidth.
  2. Even if we had enough CheckUser's, this would be a severe privacy violation absolutely prohibited under the Foundation privacy policy.

The vast majority of vandals and other disruptive users don't need CU involvement to deal with. There's very little to be gained from this.--Jasper Deng (talk) 18:36, 23 November 2024 (UTC)

It is perhaps an interesting conversation to have but I have to agree that it is unworkable, and directly contrary to foundation-level policy which we cannot make a local exemption to. En.wp, I believe, already has the largest CU team of any WMF project, but we would need hundreds more people on that team to handle something like this. In the last round of appointments, the committee approved exactly one checkuser, and that one was a returning former member of the team. And there is the very real risk that if we appointed a whole bunch of new CUs, some of them would abuse the tool. Just Step Sideways 18:55, 23 November 2024 (UTC)
And it's worth pointing out that the Committee approving too few volunteers for Checkuser (regardless of whether you think they are or aren't) is not a significant part of this issue. There simply are not tens of people who are putting themselves forward for consideration as CUs. Since 2016, 54 applications (an average of per year) have been put forward for consideration by Functionaries (the highest was 9, the lowest was 2). Note this is total applications not applicants (more than one person has applied multiple times), and is not limited to candidates who had a realistic chance of being appointed. Thryduulf (talk) 20:40, 23 November 2024 (UTC)
The dearth of candidates has for sure been an ongoing thing; it's worth reminding admins that they don't have to wait for the committee to call for candidates, you can put your name forward at any time by emailing the committee. Just Step Sideways 23:48, 24 November 2024 (UTC)
Generally, I tend to get the impression from those who have checkuser rights that CU should be done as a last resort, and other, less invasive methods are preferred, and it would seem that indiscriminate use of it would be a bad idea, so I would have some major misgivings about this proposal. And given the ANI case, the less user information that we retain, the better (which is also probably why temporary accounts are a necessary and prudent idea despite other potential drawbacks). Abzeronow (talk) 03:56, 23 November 2024 (UTC)
Oppose. A lot has already been written on the unsustainable workload for the CU team this would create and the amount of collateral damage; I'll add in the fact that our most notorious sockmasters in areas like PIA already use highly sophisticated methods to evade CU detection, and based on what I've seen at the relevant SPIs most of the blocks in these cases are made with more weight given to the behaviour, and even then only after lengthy deliberations on the matter. These sorts of sockmasters seem to have been in the OP's mind when the request was made, and I do not see automated CU being of any more use than current techniques against such dedicated sockmasters. And, as has been mentioned before, most cases of sockpuppetry (such as run-of-the-mill vandals and trolls using throwaway accounts for abuse) don't need CU anyways. JavaHurricane 08:17, 24 November 2024 (UTC)
These are, unfortunately, fair points about the limits of CU and the many experienced and dedicated ban evading actors in PIA. CU information retention policy is also a complicating factor. Sean.hoyland (talk) 08:28, 24 November 2024 (UTC)
As I said in my original post, recidivist socks often get better at covering their "tells" each time making behavioural detection increasingly difficult and meaning the entire burden falls on the honest user to convince an Admin to take an SPI case seriously with scarce evidence. After many years I'm tired of defending various pages from sock POV edits and if WMF won't make life easier then increasingly I just won't bother, I'm sure plenty of other users feel the same way. Mztourist (talk) 05:45, 26 November 2024 (UTC)

SimilarEditors

The development of mw:Extension:SimilarEditors -- the type of tool that could be used to do what Mztourist suggests -- has been "stalled" since 2023 and downgraded to low-priority in 2024, according to its documentation page and related phab tasks (see e.g. phab:T376548, phab:T304633, phab:T291509). Anybody know why? Levivich (talk) 17:43, 26 November 2024 (UTC)

Honestly, the main function of that sort of thing seems to be compiling data that is already available on XTools and various editor interaction analyzers, and then presenting it nicely and neatly. I think that such a page could be useful as a sanity check, and it might even be worth having that sort of thing as a standalone toolforge app, but I don't really see why the WMF would make that particular extension a high priority. — Red-tailed hawk (nest) 17:58, 26 November 2024 (UTC)
Well, it doesn't have to be that particular extension, but it seems to me that the entire "idea" has been stalled, unless they're working on another tool that I'm unaware of (very possible). (Or, it could be because of recent changes in domestic and int'l privacy laws that derailed their previous development advances, or it could be because of advancements in ML elsewhere making in-house development no longer practical.)

As to why the WMF would make this sort of problem a high priority, I'd say because the spread of misinformation on Misplaced Pages by sockpuppets is a big problem. Even without getting into the use of user metadata, just look at recent SPIs I filed, like Misplaced Pages:Sockpuppet investigations/Icewhiz/Archive#27 August 2024 and Misplaced Pages:Sockpuppet investigations/Icewhiz/Archive#09 October 2024. That involved no private data at all, but a computer could have done automatically, in seconds, what took me hours to do manually, and those socks could have been uncovered before they made thousands and thousands of edits spreading misinformation. If the computer looked at private data as well as public data, it would be even more effective (and would save CUs time as well). Seems to me to be a worthy expenditure of 0.5% or 1% of the WMF's annual budget. Levivich (talk) 18:09, 26 November 2024 (UTC)

This looks really interesting. I don't really know how extensions are rolled out to individual wikis - can anyone with knowledge about that summarise if having this tool turned on (for check users/relevant admins) for en.wp is feasible? Do we need a RFC, or is this a "maybe wait several years for a phab ticket" situation? BugGhost🦗👻 18:09, 26 November 2024 (UTC)
I find it amusing that ~4 separate users above are arguing that automatic identification of sockpuppets is impossible, impractical, and the WMF would never do it—and meanwhile, the WMF is already doing it. – Closed Limelike Curves (talk) 19:29, 27 November 2024 (UTC)
So, discussion is over? 2601AC47 (talk·contribs·my rights) Isn't a IP anon 19:31, 27 November 2024 (UTC)
I think what's happening is that people are having two simultaneous discussions – automatic identification of sockpuppets is already being done, but what people say "the WMF would never do" is using private data (e.g. IP addresses) to identify them. Which adds another level of (ethical, if not legal) complications compared to what SimilarEditors is doing (only processing data everyone can access, but in an automated way). Chaotic Enby (talk · contribs) 07:59, 28 November 2024 (UTC)
"automatic identification of sockpuppets is already being done" is probably an overstatement, but I agree that there may be a potential legal and ethical minefield between the Similarusers service that uses public information available to anyone from the databases after redaction of private information (i.e. coarse-grained sampling of revision timestamps combined with an attempt to quantify page intersection data), and a service that has access to the private information associated with a registered account name. Sean.hoyland (talk) 11:15, 28 November 2024 (UTC)
The WMF said they're planning on incorporating IP addresses and device info as well! – Closed Limelike Curves (talk) 21:21, 29 November 2024 (UTC)
Yes, automatic identification of (these) sockpuppets is impossible. There are many reasons for this, but the simplest one is this: These types of tools require hundreds of edits – at minimum – to return any viable data, and the sort of sockmasters who get accounts up to that volume of edits know how to evade detection by tools that analyse public information. The markers would likely indicate people from similar countries – naturally, two Cypriots would be interested in Category:Cyprus and over time similar hour and day overlaps will emerge, but what's to let you know whether these are actual socks when they're evading technical analysis? You're back to square one. There are other tools such as mediawikiwiki:User:Ladsgroup/masz which I consider equally circumstantial; an analysis of myself returns a high likelihood of me being other administrators and arbitrators, while analysing an alleged sock currently at SPI returns the filer as the third most likely sockmaster. This is not commentary on the tools themselves, but rather simply the way things are. DatGuyContribs 17:42, 28 November 2024 (UTC)
Oh, fun! Too bad it's CU-restricted, I'm quite curious to know what user I'm most stylometrically similar to. -- asilvering (talk) 17:51, 28 November 2024 (UTC)
That would be LittlePuppers and LEvalyn. DatGuyContribs 03:02, 29 November 2024 (UTC)
Fascinating! One I've worked with, one I haven't, both AfC reviewers. Not bad. -- asilvering (talk) 06:14, 29 November 2024 (UTC)
Idk, the half dozen ARBPIA socks I recently reported at SPI were obvious af to me, as are several others I haven't reported yet. That may be because that particular sockfarm is easy to spot by its POV pushing and a few other habits; though I bet in other topic areas it's the same. WP:ARBECR helps because it forces the socks to make 500 edits minimum before they can start POV pushing, but still we have to let them edit for a while post-XC just to generate enough diffs to support an SPI filing. Software that combines tools like Masz and SimilarEditor, and does other kinds of similar analysis, could significantly reduce the amount of editor time required to identify and report them. Levivich (talk) 18:02, 28 November 2024 (UTC)
I think it is possible, studies have demonstrated that it is possible, but it is true that having a sufficient number of samples is critical. Samples can be aggregated in some cases. There are several other important factors too. I have tried some techniques, and sometimes they work, or let's say they can sometimes produce results consistent with SPI results, better than random, but with plenty of false positives. It is also true that there are a number of detection countermeasures (that I won't describe) that are already employed by some bad actors that make detection harder. But I think the objective should be modest, to just move a bit in the right direction by detecting more ban evading accounts than are currently detected, or at least to find ways to reduce the size of the search space by providing ban evasion candidates. Taking the human out of the detection loop might take a while. Sean.hoyland (talk) 18:39, 28 November 2024 (UTC)
If you mean it's never going to be possible to catch some sockpuppets—the best-hidden, cleverest, etc. ones—you're completely correct. But I'm guessing we could cut the amount of time SPI has to spend dramatically with just some basic checks. – Closed Limelike Curves (talk) 02:27, 29 November 2024 (UTC)
I disagree. Empirically, the vast majority of time spent at SPI is not on finding possible socks, nor on using the CheckUser tool on them, but rather on the CU-completed cases (of which there are currently 14, and I should probably stop slacking and get onto some) with non-definitive technical results waiting on an administrator to make the final determination on whether they're socks or not. Extension:SimilarUsers would concentrate various information that already exists (EIA, RoySmith's SPI tools) in one place, but I wouldn't say the accessibility of these tools is a cause of SPI backlog. An AI analysis tool to give an accurate magic number for likelihood? I'm anything but a Luddite, but still believe that's wishful thinking. DatGuyContribs 03:02, 29 November 2024 (UTC)
Something seems better than nothing in this context, doesn't it? EIA and the Similarusers service don't provide an estimate of the significance of page intersections. An intersection on a page with few revisions or few unique actors or few pageviews etc. is very different from a page intersection on the Donald Trump page. That kind of information is probably something that could sometimes help, even just to evaluate the importance of intersection evidence presented at SPIs. It seems to me that any kind of assistance could help. And another thing about the number of edits is that too many samples can also present challenges related to noise, with signals getting smeared out, although the type of noise in a user's data can itself be a characteristic signal in some cases it seems. And if there are too few samples, you can generate synthetic samples based on the actual samples and inject them into spaces. Search strategy matters a lot. The space of everyone vs everyone is vast, so good luck finding potential matches in that space without a lot of compute, especially for diffs. But many socks inhabit relatively small subspaces of Misplaced Pages, at least in the 20%-ish of time (on average in PIA) they edit(war)/POV-push etc. in their topic of interest. So, choosing the candidate search space and search strategy wisely can make the problem much more tractable for a given topic area/subspace. Targeted fishing by picking a potential sock and looking for potential matches (the strategy used by the Similarusers service and CU I guess) is obviously a very different challenge than large-scale industrial fishing for socks in general. Sean.hoyland (talk) 04:08, 29 November 2024 (UTC)
And to continue the whining about existing tools, EIA and the Similarusers service use a suboptimal strategy in my view. If the objective is page intersection information for a potential sock against a sockmaster, and a ban evasion source has employed n identified actors so far e.g. almost 50 accounts for Icewhiz, the source's revision data should be aggregated for the intersection. This is not difficult to do using the category graph and the logs. Sean.hoyland (talk) 04:25, 29 November 2024 (UTC)
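The two ideas above (aggregating all of a sockmaster's known accounts into one revision set, and down-weighting intersections on heavily-edited pages) can be sketched together. The editor counts and page titles below are invented inputs; a real tool would pull them from the replica databases.

```python
# Hedged sketch: score a page intersection between a candidate account and the
# aggregated page set of a sockmaster's known accounts, weighting each shared
# page by how rarely it is edited. All data below is illustrative.
import math

def intersection_score(cand_pages, master_pages, editors_per_page):
    """Sum of per-page weights over the intersection; rare pages count more."""
    shared = cand_pages & master_pages
    # Weight = 1 / log2(1 + unique editors): an obscure page with 3 editors
    # contributes ~0.5, a page with 50,000 editors contributes ~0.06.
    return sum(1 / math.log2(1 + editors_per_page[p]) for p in shared)

# Union of pages edited by all ~n identified accounts of one master:
master = {"Obscure village", "1921 treaty", "Donald Trump"}
candidate = {"Obscure village", "Donald Trump", "Cats"}
editors = {"Obscure village": 3, "1921 treaty": 7, "Donald Trump": 50000, "Cats": 900}
print(round(intersection_score(candidate, master, editors), 3))
```

Under this weighting, sharing one obscure page outweighs sharing one extremely popular page, which is the intuition about the Donald Trump example above.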
There is so much more that could be done with the software. EIA gives you page overlaps (and isn't 100% accurate at it), but it doesn't tell you:
  • how many times the accounts made the same edits (tag team edit warring)
  • how many times they voted in the same formal discussions (RfC, AfD, RM, etc) and whether they voted the same way or different (vote stacking)
  • how many times they use the same language and whether they use unique phraseology
  • whether they edit at the same times of day
  • whether they edit on the same days
  • whether account creation dates (or start-of-regular-editing dates) line up with when other socks were blocked
  • whether they changed focus after reaching XC and to what extent (useful in any ARBECR area)
  • whether they "gamed" or "rushed" to XC (same)
All of this (and more) would be useful to see in a combined way, like a dashboard. It might make sense to restrict access to such compilations of data to CUs, and the software could also throw in metadata or subscriber info in there, too (or not), and it doesn't have to reduce it all into a single score like ORES, but just having this info compiled in one place would save editors the time of having to compile it manually. If the software auto-swept logs for this info and alerted humans to any "high scores" (however defined, eg "matches across multiple criteria"), it would probably not only reduce editor time but also increase sock discovery. Levivich (talk) 04:53, 29 November 2024 (UTC)
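One of the signals on the list above, whether two accounts edit at the same times of day, has a simple well-known formulation: compare 24-bin hour-of-day histograms with cosine similarity. The timestamps below are invented; a real dashboard would read them from the revision table.

```python
# Hedged sketch of a single dashboard signal: time-of-day overlap between two
# accounts, as cosine similarity of their hour-of-day edit histograms.
import math
from collections import Counter

def hour_histogram(hours):
    """24-bin count of edits per UTC hour; input is a list of hour integers."""
    c = Counter(h % 24 for h in hours)
    return [c.get(h, 0) for h in range(24)]

def cosine(a, b):
    """Cosine similarity of two equal-length vectors; 0.0 if either is empty."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

acct_a = [13, 14, 14, 15, 20, 21]  # UTC hours of account A's edits (invented)
acct_b = [13, 13, 14, 15, 21, 22]  # UTC hours of account B's edits (invented)
print(round(cosine(hour_histogram(acct_a), hour_histogram(acct_b)), 2))
```

A dashboard would compute several such signals (days active, discussion co-!voting, account-creation timing) and surface only the accounts that match across multiple criteria, rather than reducing everything to one score.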
This is like one of my favorite strategies for meetings. Propose multiple things, many of which are technically challenging, then just walk out of the meeting.
The 'how many times the accounts made the same edits' is probably do-able because you can connect reverted revisions to the revisions that reverted them using json data in the database populated as part of the tagging system, look at the target state reverted to and whether the revision was an exact revert. ...or maybe not without computing diffs, having just looked at an article with a history of edit warring. Sean.hoyland (talk) 07:43, 29 November 2024 (UTC)
I agree with Levivich that automated, privacy-protecting sock-detection is not a pipe dream. I proposed a system something like this in 2018, see also here, and more recently here. However, it definitely requires a bit of software development and testing. It also requires the community and the foundation devs or product folks to prioritize the idea. Andre🚐 02:27, 10 December 2024 (UTC)
  • Comment. For some time I have vehemently suspected that this site is crawling with massive numbers of sockpuppets, that the community seems to be unable or unwilling to recognise probable sockpuppets for what they are, and that it is not feasible to send them to SPI one at a time. I see a large number of accounts that are sleepers, or that have low edit counts, trying to do things that are controversial or otherwise suspicious. I see them showing up at discussions in large numbers and in quick succession, and offering !votes consisting of interpretations of our policies and guidelines that may not reflect consensus, or other statements that may not be factually accurate.
I think the solution is simple: when closing community discussions, admins should look at the edit count of each !voter when determining how much weight to give his !vote. The lower the edit count, the greater the level of sleeper behaviour, and the more controversial the subject of the discussion is amongst the community, the less weight should be given to the !vote.
For example, if an account with less than one thousand edits !votes in a discussion about 16th century Tibetan manuscripts, we may well be able to trust that !vote, because the community does not care about such manuscripts. But if the same account !votes on anything connected with "databases" or "lugstubs", we should probably give that !vote very little weight, because that was the subject of a massive dispute amongst the community, and any discussion on that subject is not particularly unlikely to be crawling with socks on both sides. The feeling is that, if you want to be taken seriously in such a controversial discussion, you need to make enough edits to prove that you are a real person, and not a sock. James500 (talk) 15:22, 12 December 2024 (UTC)
The site presumably has a large number of unidentified sockpuppets. As for the identified ban evading accounts, accounts categorized or logged as socks, if you look at 2 million randomly selected articles for the 2023-10-07 to 2024-10-06 year, just under 2% of the revisions are by ban evading actors blocked for sockpuppetry (211,546 revisions out of 10,732,361). A problem with making weight dependent on edit count is that the edit count number does not tell you anything about the probability that an account is a sock. Some people use hundreds of disposable accounts, making just a few edits with each account. Others stick around and make thousands of edits before they are detected. Also, Misplaced Pages provides plenty of tools that people can use to rapidly increase their edit count. Sean.hoyland (talk) 16:12, 12 December 2024 (UTC)
Can I ask why? Is it a privacy-based concern? IPs are automatically collected and stored for 90 days, and maybe for years in the backups, regardless of CUs. That's a 90 day window that a machine could use to do something with them without anyone running a CU and without anyone having to see what the machine sees. Sean.hoyland (talk) 15:05, 15 December 2024 (UTC)
Primarily privacy concerns, as well as concerns about false positives. A lot of people here probably share an IP with other editors without even knowing it. I also would like to maintain my personal privacy, and I know many other editors would too. There are other methods of fighting sockpuppets that don't have as much collateral damage, and we should pursue those instead. QuicoleJR (talk) 15:16, 17 December 2024 (UTC)
Also, it wouldn't even work on some sockpuppets, because IP info is only retained for 90 days, so a blocked editor could just wait out the 90 days and then return with a new account. QuicoleJR (talk) 15:19, 17 December 2024 (UTC)
@Levivich—one situation where I think we could pull a lot of data, and probably detect tons of sockpuppets, is !votes like RfAs and RfCs. Those have a lot of data, in addition to a very strong incentive for socking—you'd expect to see a bimodal distribution where most accounts have moderately-correlated views, but a handful have extremely strong correlations (always !voting the same way), more than could plausibly happen by chance or by overlapping views. For accounts in the latter group, we'd have strong grounds to suspect collusion/canvassing or socking.
RfAs are already in a very nice machine-readable format. RfCs aren't, but most could easily be made machine-readable (by adopting a few standardized templates). We could also build a tool for semi-automated recoding of old RfCs to get more data. – Closed Limelike Curves (talk) 18:56, 16 December 2024 (UTC)
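The correlation check described above reduces to something simple once !votes are machine-readable: compute pairwise agreement rates over shared discussions and flag pairs at the extreme end of the distribution. The vote data below is invented for illustration.

```python
# Hedged sketch of pairwise !vote agreement. In a real analysis, the number
# of shared discussions matters: 100% agreement over 40 RfCs is suspicious,
# over 3 it is meaningless.
from itertools import combinations

votes = {  # discussion id -> {user: position} (invented data)
    "RfA1": {"A": "support", "B": "support", "C": "oppose"},
    "RfC7": {"A": "oppose",  "B": "oppose",  "C": "oppose"},
    "RfC9": {"A": "support", "B": "support", "C": "support"},
    "RM3":  {"A": "oppose",  "B": "oppose",  "C": "support"},
}

def agreement(u, v):
    """(agreement rate, number of shared discussions) for users u and v."""
    shared = [d for d in votes.values() if u in d and v in d]
    if not shared:
        return None, 0
    agree = sum(d[u] == d[v] for d in shared)
    return agree / len(shared), len(shared)

for u, v in combinations("ABC", 2):
    rate, n = agreement(u, v)
    print(u, v, f"{rate:.2f} over {n} discussions")
```

Accounts A and B agree in every shared discussion here, which is the kind of pair the bimodal-distribution argument above says deserves scrutiny (whether as socks or as canvassing).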
Would that data help with the general problem? If there are a lot of socks on an RfA, I'd expect that to be picked up by editors. Those are very well-attended. The same may apply to many RfCs. Perhaps the less well-attended ones might be affected, but the main challenge is article edits, which will not be similarly structured. CMD (talk) 19:13, 16 December 2024 (UTC)

Would that data help with the general problem? If there are a lot of socks on an RfA, I'd expect that to be picked up by editors.

Given we've had situations of sockpuppets being made admins themselves, I'm not too sure of this myself. If someone did create a bunch of socks, as some people have alleged in this thread, it'd be weird of them not to use those socks to influence policy decisions. I'm pretty skeptical, but I do think investigating would be a good idea (if nothing else because of how important it is—even the possibility of substantial RfA/RfC manipulation is quite bad, because it undermines the whole idea of consensus). – Closed Limelike Curves (talk) 21:04, 16 December 2024 (UTC)
RFAs, RfCs, RMs, AfDs, and arbcom elections. Levivich (talk) 23:11, 17 December 2024 (UTC)

What do we do with this information?

I think we've put the cart before the horse here a bit. While we've established it's possible to detect most sockpuppets automatically—and the WMF is already working on it—it's not clear what this would actually achieve, because having multiple accounts isn't against the rules. I think we'd need to establish a set of easy-to-enforce boundaries for people using multiple accounts. My proposal is to keep it simple—two accounts controlled by the same person can't edit the same page (or participate in the same discussion) without disclosing they're the same editor. – Closed Limelike Curves (talk) 04:41, 14 December 2024 (UTC)

This is already covered by WP:LEGITSOCK I think. Andre🚐 05:03, 14 December 2024 (UTC)
And as there are multiple legitimate ways to disclose, not all of which are machine readable, any automatically generated list is going to need human review. Thryduulf (talk) 10:13, 14 December 2024 (UTC)
Yes, that's definitely the case; automatic sock detection should probably never trigger an autoblock, or at least not unless there is a good reason in that specific circumstance, like a well-trained filter for a specific LTA. Having the output of automatic sock detection should still be restricted to CU/OS or another limited user group who can be trusted to treat possible user-privacy-related issues with discretion, and who have gone through the appropriate legal rigmarole. There could also be some false positives or unusual situations when piloting a program like this. For example, I've seen dynamic IPs get assigned to someone else after a while, which is unlikely but not impossible depending on how an ISP implements DHCP, though I guess collisions become less common with IPv6, or if the fingerprinting is implemented with a lot of datapoints to reduce the likelihood of false positives. Andre🚐 10:31, 14 December 2024 (UTC)
I think we are probably years away from being able to rely on autonomous agents to detect and block socks without a human in the loop. For now, people need as much help as they can get to identify ban evasion candidates. Sean.hoyland (talk) 10:51, 14 December 2024 (UTC)

or at least not unless there is a good reason in that specific circumstance,

Yep, basically I'm saying we need to define "good reason". The obvious situation is automatically blocking socks of blocked accounts. I also think we should just automatically prevent detected socks from editing the same page (ideally make it impossible, to keep it from being done accidentally). – Closed Limelike Curves (talk) 17:29, 14 December 2024 (UTC)

Requiring registration for editing

Note: This section was split off from "CheckUser for all new users" (permalink) and the "parenthetical comment" referred to below is: (Also, email-required registration and get rid of IP editing.)—03:49, 26 November 2024 (UTC)

@Levivich, about your parenthetical comment on requiring registration:

Part of the eternally unsolvable problem is that new editors are frankly bad at it. I can give examples from my own editing: Create an article citing a personal blog post as the main source? Check. Merge two articles that were actually different subjects? Been there, done that, got the revert. Misunderstand and mangle wikitext? More times than I can count. And that's after I created my account. Like about half of experienced editors, I edited as an IP first, fixing a typo here or reverting some vandalism there.

But if we don't persist through these early problems, we don't get experienced editors. And if we don't get experienced editors, Misplaced Pages will die.

Requiring registration ("get rid of IP editing") shrinks the number of people who edit. The Portuguese Misplaced Pages banned IPs only from the mainspace three years ago. Have a look at the trend. After the ban went into effect, they had 10K or 11K registered editors each month. It's since dropped to 8K. The number of contributions has dropped, too. They went from 160K–210K edits per month down to 140K most months.

Some of the experienced editors have said that they like this. No IPs means less impulsive vandalism, and the talk pages are stable if you want to talk to the editor. Fewer newbies means I don't "have to" clean up after so many mistake-makers! Fewer editors, and especially fewer inexperienced editors, is more convenient – in the short term. But I wonder whether they're going to feel the same way a decade from now, when their community keeps shrinking, and they start wondering when they will lose critical mass.

The same thing happens in the real world, by the way. Businesses want to hire someone with experience. They don't want to train the helpless newbie. And then after years of everybody deciding that training entry-level workers is Somebody else's problem, they all look around and say: Where are all the workers that I need? Why didn't someone else train the next generation while I was busy taking the easy path?

In case you're curious, there is a Misplaced Pages that puts all of the IP and newbie edits under "PC" type restrictions. Nobody can see the edits until they've been approved by an experienced editor. The rate of vandalism visible to ordinary readers is low. Experienced editors love the level of control they have. Have a look at what's happened to the size of their community during the last decade. Is that what you want to see here? If so, we know how to make that happen. The path to that destination even looks broad, easy, and paved with all kinds of good intentions. WhatamIdoing (talk) 04:32, 23 November 2024 (UTC)

Size isn't everything... what happened to their output--the quality of their encyclopedias--after they made those changes? Levivich (talk) 05:24, 23 November 2024 (UTC)
Well, I can tell you objectively that the number of edits declined, but "quality" is in the eye of the beholder. I understand that the latter community has the lowest use of inline citations of any mid-size or larger Misplaced Pages. What's now yesterday's TFA there wouldn't even be rated B-class here due to whole sections not having any ref tags. In terms of citation density, their FA standard is currently where ours was >15 years ago.
But I think you have missed the point. Even if the quality has gone up according to the measure of your choice, if the number of contributors is steadily trending in the direction of zero, what will the quality be when something close to zero is reached? That community has almost halved in the last decade. How many articles are out of date, or missing, because there simply aren't enough people to write them? A decade from now, with half as many editors again, how much worse will the articles be? We're none of us idiots here. We can see the trend. We know that people die. You have doubtless seen this famous line:

All men are mortal. Socrates is a man. Therefore, Socrates is mortal.

I say:

All Misplaced Pages editors are mortal. Dead editors do not maintain or improve Misplaced Pages articles. Therefore, maintaining and improving Misplaced Pages requires editors who are not dead.

– and, memento mori, we are going to die, my friend. I am going to die. If we want Misplaced Pages to outlive us, we cannot be so shortsighted as to care only about the quality today, and never the quality the day after we die. WhatamIdoing (talk) 06:13, 23 November 2024 (UTC)
Trends don't last forever. Enwiki's active user count decreased from its peak over a few years, then flattened out for over a decade. The quality increased over that period of time (by any measure). Just because these other projects have shed users doesn't mean they're doomed to have zero users at some point in the future. And I think there's too many variables to know how much any particular change made on a project affects its overall user count, nevermind the quality of its output. Levivich (talk) 06:28, 23 November 2024 (UTC)
If the graph to the right accurately reflects the age distribution of Misplaced Pages users, then a large chunk of the user base will die off within the next decade or two. Not to be dramatic, but I agree that requiring registration to edit, which will discourage readers from editing in the first place, will hasten the project's decline.... Some1 (talk) 14:40, 23 November 2024 (UTC)
😂 Seriously? What do you suppose that chart looked like 20 years ago, and then what happened? Levivich (talk) 14:45, 23 November 2024 (UTC)
There are significantly more barriers to entry than there were 20 years ago, and over that time the age profile has increased (quite significantly iirc). Adding more barriers to entry is not the way to solve the issues caused by barriers to entry. Thryduulf (talk) 15:50, 23 November 2024 (UTC)
"PaperQA2 writes cited, Misplaced Pages style summaries of scientific topics that are significantly more accurate than existing, human-written Misplaced Pages articles" - maybe the demographics of the community will change. Sean.hoyland (talk) 16:30, 23 November 2024 (UTC)
That talks about LLM usage in articles, not the users. 2601AC47 (talk|contribs) Isn't a IP anon 16:34, 23 November 2024 (UTC)
Or you could say it's about a user called PaperQA2 that writes Misplaced Pages articles significantly more accurate than articles written by other users. Sean.hoyland (talk) 16:55, 23 November 2024 (UTC)
No, it is very clearly about a language model. As far as I know, PaperQA2, or WikiCrow (the generative model using PaperQA2 for question answering), has not actually been making any edits on Misplaced Pages itself. Chaotic Enby (talk · contribs) 16:58, 23 November 2024 (UTC)
That is true. It is not making any edits on Misplaced Pages itself. There is a barrier. But my point is that in the future that barrier may not be there. There may be users like PaperQA2 writing articles better than other users and the demographics will have changed to include new kinds of users, much younger than us. Sean.hoyland (talk) 17:33, 23 November 2024 (UTC)
And who will never die off! Levivich (talk) 17:39, 23 November 2024 (UTC)
But which will not be Misplaced Pages. WhatamIdoing (talk) 06:03, 24 November 2024 (UTC)
In re "What do you suppose that chart looked like 20 years ago": I believe that the numbers, very roughly, are that the community has gotten about 10 years older, on average, than it was 20 years ago. That is, we are bringing in some younger people, but not at a rate that would allow us to maintain the original age distribution. (Whether the original age distribution was a good thing is a separate consideration.) WhatamIdoing (talk) 06:06, 24 November 2024 (UTC)
I like looking at the en.wikipedia user retention graph hosted on Toolforge (for anyone who might go looking for it later, there's a link to it at Misplaced Pages:WikiProject Editor Retention § Resources). It shows histograms of how many editors have edited in each month, grouped by all the editors who started editing in the same month. The data is noisy, but it does seem to show an increase in editing tenure since 2020 (when the pursuit of a lot of solo hobbies picked up, of course). Prior to that, there does seem to be a bit of slow growth in tenure length since the lowest point around 2013. isaacl (talk) 17:18, 23 November 2024 (UTC)
The trend is a bit clearer when looking at the retention graph based on those who made at least 10 edits in a month. (To see the trend when looking at the retention graph based on 100 edits in a month, the default colour range needs to be shifted to accommodate the smaller numbers.) isaacl (talk) 17:25, 23 November 2024 (UTC)
I'd say that the story there is: Something amazing happened in 2006. Ours (since both of us registered our accounts that year) was the year from which people stuck around. I think that would be just about the time that the wall o' automated rejection really got going. There was some UPE-type commercial pressure, but it didn't feel unmanageable. It looks like an inflection point in retention. WhatamIdoing (talk) 06:12, 24 November 2024 (UTC)
I don't think something particularly amazing happened in 2006. I think the rapid growth in articles starting in 2004 attracted a large land rush of editors as Misplaced Pages became established as a top search result. The cohort of editors at that time had the opportunity to cover a lot of topics for the first time on Misplaced Pages, requiring a lot of co-ordination, which created bonds between editors. As topic coverage grew, there were fewer articles that could be more readily created by generalists, the land rush subsided, and there was less motivation for new editors to persist in editing. Boom-bust cycles are common for a lot of popular things, particularly in tech where newer, shinier things launch all the time. isaacl (talk) 19:07, 24 November 2024 (UTC)
Ah yes, that glorious time when we gained an article on every Pokemon character and, it seems, every actor who was ever credited in a porn movie. Unfortunately, many of the editors I bonded with then are no longer active. Some are dead, some finished school, some presumably burned out, at least one went into the ministry. Donald Albury 23:49, 26 November 2024 (UTC)
Have a look at what happened to the size of their community.—I'm gonna be honest: eyeballing it, I don't actually see much (if any) difference with enwiki's activity. "Look at this graph" only convinces people when the dataset passes the interocular trauma test (e.g. the hockey stick).
On the other hand, if there's a dataset of "when did $LANGUAGEwiki adopt universal pending changes protections", we could settle this argument once and for all using a real statistical model that can deliver precise effect sizes on activity. Maybe then we can all finally drop the stick. – Closed Limelike Curves (talk) 18:08, 26 November 2024 (UTC)
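For what it's worth, the "real statistical model" wouldn't need to be elaborate to start with; a difference-in-differences comparison of activity on wikis that adopted a restriction against wikis that didn't is the standard first cut. A toy sketch (the monthly edit counts below are invented, not real wiki data):

```python
# Difference-in-differences on made-up monthly edit counts.
# "treated" = a wiki that adopted the restriction; "control" = one that didn't.
treated_before, treated_after = [100, 110, 105], [90, 95, 92]
control_before, control_after = [200, 205, 210], [205, 210, 215]

def mean(xs):
    return sum(xs) / len(xs)

# Effect estimate: the treated wiki's change minus the control wiki's change,
# which nets out trends shared by both wikis (e.g. a site-wide decline).
did = (mean(treated_after) - mean(treated_before)) - (
    mean(control_after) - mean(control_before)
)
print(f"estimated effect: {did:.1f} edits/month")
```

A serious version would use many wikis, monthly panel data, and standard errors, but even this skeleton shows why before/after comparisons on a single wiki (like the ptwiki graphs argued over above) can't settle the question on their own.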
This is requested once or twice a year, and the answer will always be no. You would know this if you read WP:PERENNIAL, as is requested at the top of this page Mgjertson (talk) 08:09, 17 December 2024 (UTC)

This particular idea will not pass, but the binary developing in the discussion is depressing. A bargain where we allow IPs to edit (or unregistered users generally when IPs are masked), and therefore will sit on our hands when dealing with abuse and even harassment is a grim one. Any steps taken to curtail the second half of that bargain would make the first half stronger, and I am generally glad to see thoughts about it, even if they end up being impractical. CMD (talk) 02:13, 24 November 2024 (UTC)

I don't want us to sit on our hands when we see abuse and harassment. I think our toolset is about 20 years out of date, and I believe there are things we could do that will help (e.g., mw:Temporary accounts, cross-wiki checkuser tools for Stewards, detecting and responding to a little bit more information about devices/settings ). WhatamIdoing (talk) 06:39, 24 November 2024 (UTC)
Temporary accounts will help with the casual vandalism, but they’re not going to help with abuse and harassment. If it limits the ability to see ranges, it will make issues slightly worse. CMD (talk) 07:13, 24 November 2024 (UTC)
I'm not sure what the current story is there, but when I talked to the team last (i.e., in mid-2023), we were talking about the value of a tool that would do range-related work. For various reasons, this would probably be Toolforge instead of MediaWiki, and it would probably be restricted (e.g., to admins, or to a suitable group chosen by each community), but the goal was to make it require less manual work, particularly for cross-wiki abuse, and to be able to aggregate some information without requiring direct disclosure of some PII. WhatamIdoing (talk) 23:56, 25 November 2024 (UTC)

Oh look, misleading statistics! "The Portuguese Misplaced Pages banned IPs only from the mainspace three years ago. Have a look at the trend. After the ban went into effect, they had 10K or 11K registered editors each month. It's since dropped to 8K. " Of course you have a spike in new registrations soon after you stop allowing IP edits, and you can't sustain that spike. That is not evidence of anything. It would have been more honest and illustrative to show the graph before and after the policy change, not only afterwards, e.g. thus. Oh look, banning IP editing has resulted in on average some 50% more registered editors than before the ban. Number of active editors is up 50% as well. The number of new pages has stayed the same. Number of edits is down, yes, but how much of this is due to less vandalism / vandalism reverts? A lot apparently, as the count of user edits has stayed about the same. Basically, from those statistics, used properly, it is impossible to detect any issues with the Portuguese Misplaced Pages due to the banning of IP editing. Fram (talk) 08:55, 26 November 2024 (UTC)

"how much of this is due to less vandalism / vandalism reverts?" That's a good question. Do we have some data on this? Jo-Jo Eumerus (talk) 09:20, 26 November 2024 (UTC)
@Jo-Jo Eumerus:, the dashboard is here although it looks like they stopped reporting the data in late 2021. If you take "Number of reverts" as a proxy for vandalism, you can see that the block shifted the number of reverts from a higher equilibrium to a lower one, while overall non-reverted edits does not seem to have changed significantly during that period. CMD (talk) 11:44, 28 November 2024 (UTC)
Upon thinking, it would be useful to know how many good edits are done by IP. Or as is, unreverted edits. Jo-Jo Eumerus (talk) 14:03, 30 November 2024 (UTC)
I agree that one should expect a spike in registration. (In fact, I have suggested a strictly temporary requirement to register – a few hours, even – to push some of our regular IPs into creating accounts.) But once you look past that initial spike, the trend is downward. WhatamIdoing (talk) 05:32, 29 November 2024 (UTC)

But once you look past that initial spike, the trend is downward.

I still don't see any evidence that this downward trend is unusual. Apparently the WMF did an analysis of ptwiki and didn't find evidence of a drop in activity. Net edits (non-revert edits standing for at least 48 hours) increased by 5.7%, although edits across other wikis increased slightly more. The impression I get is any effects are small either way—the gains from freeing up anti-vandalism resources basically offset the cost of some IP editors not signing up.
TBH this lines up with what I'd expect. Very few people I talk to cite issues like "creating an account" as a major barrier to editing Misplaced Pages. The most common barrier I've heard from people who tried editing and gave it up is "Oh, I tried, but then some random admin reverted me, linked me to MOS:OBSCURE BULLSHIT, and told me to go fuck myself but with less expletives." – Closed Limelike Curves (talk) 07:32, 29 November 2024 (UTC)

But once you look past that initial spike, the trend is downward.

Not really obvious, and not more or even less so in Portuguese wikipedia than in e.g. Enwiki, FRwiki, NLwiki, ESwiki, Svwiki... So, once again, these statistics show no issue at all with disabling IP editing on Portuguese Misplaced Pages. Fram (talk) 10:38, 29 November 2024 (UTC)

Aside from the obvious loss of good 'IP' editors, I think there's a risk of unintended consequences from 'stopping vandalism' at all; 'vandalism' and 'disruptive editing' from IP editors (or others) isn't necessarily a bad thing, long term. Even the worst disruptive editors 'stir the pot' of articles, bringing attention to articles that need it, and otherwise would have gone unnoticed. As someone who mostly just trawls through recent changes, I can remember dozens of times when an IP, or brand-new, user comes along and breaks an article entirely, but their edit leads inexorably to the article being improved. Sometimes there is a glimmer of a good point in their edit, that I was able to express better than they were, maybe in a more balanced or neutral way. Sometimes they make an entirely inappropriate edit, but it brings the article to the top of the list, and upon reading it I notice a number of other, previously missed, problems in the article. Sometimes, having reverted a disruptive change, I just go and add some sources or fix a few typos in the article before I go on my merry way. You might think, 'Ah, but Random article would let you find those problems too.' But 'Random article' is, well, random. IP editors are more targeted, and that someone felt the need to disparage a certain person's mother in fact brings attention to an article about someone who is, unbeknownst to us editors, particularly contentious in the world of Czech Jazz Flautists, so there is a lot there to fix. By stopping people making these edits, we risk a larger proportion of articles becoming entirely stagnant. JeffUK 15:00, 9 December 2024 (UTC)

I feel that the glassmaker has been too clever by half here. "Ahh, but vandalism of articles stimulates improvements to those articles." If the analysis ends there, I have no objections. But if, on the other hand, you come to the conclusion that it is a good thing to vandalize articles, that it causes information to circulate, and that the encouragement of editing in general will be the result of it, you will oblige me to call out, "Halt! Your theory is confined to that which is seen; it takes no account of that which is not seen." If I were to pay a thousand people to vandalize Misplaced Pages articles full-time, bringing more attention to them, would I be a hero or villain? If vandalism is good, why do we ban vandals instead of thanking them? Because vandalism is bad—every hour spent cleaning up after a vandal is one not spent writing a new article or improving an existing one.
On targeting: vandals are more targeted than a "random article", but are far more destructive than basic tools for prioritizing content, and less effective than even very basic prioritization tools like sorting articles by total views. – Closed Limelike Curves (talk) 19:11, 9 December 2024 (UTC)
I mean, I only said Vandalism "isn't necessarily a bad thing, long term", I don't think it's completely good, but maybe I should have added 'in small doses', fixing vandalism takes one or two clicks of the mouse in most cases and it seems, based entirely on my anecdotal experience, to sometimes have surprisingly good consequences; intuitively, you wouldn't prescribe vandalism, but these things have a way of finding a natural balance, and what's intuitive isn't necessarily what's right. One wouldn't prescribe dropping asteroids on the planet you're trying to foster life upon after you finally got it going, but we can be pretty happy that it happened! - And 'vandalism' is the very worst of what unregistered (and registered!) users get up to, there are many, many more unambiguously positive contributors than unambiguously malicious. JeffUK 20:03, 9 December 2024 (UTC)

intuitively, you wouldn't prescribe vandalism

Right, and I think this is mainly the intuition I wanted to invoke here—"more vandalism would be good" a bit too galaxy-brained of a take for me to find it compelling without some strong empirical evidence to back it up.
Although TBH, I don't see this as a big deal either way. We already have to review and check IP edits for vandalism; the only difference is whether that content is displayed while we wait for review (with pending changes, the edit is hidden until it's reviewed; without it, the edit is visible until someone reviews and reverts it). This is unlikely to substantially affect contributions (the only difference on the IP's end is they have to wait a bit for their edit to show up) or vandalism (since we already de facto review IP edits). – Closed Limelike Curves (talk) 04:29, 14 December 2024 (UTC)

Revise Misplaced Pages:INACTIVITY

Point 1 of Procedural removal for inactive administrators which currently reads "Has made neither edits nor administrative actions for at least a 12-month period" should be replaced with "Has made no administrative actions for at least a 12-month period". The current wording of 1. means that an Admin who takes no admin actions keeps the tools provided they make at least a few edits every year, which really isn't the point. The whole purpose of adminship is to protect and advance the project. If an admin isn't using the tools then they don't need to have them. Mztourist (talk) 07:47, 4 December 2024 (UTC)

Endorsement/Opposition (Admin inactivity removal)

  • Support as proposer. Mztourist (talk) 07:47, 4 December 2024 (UTC)
  • Oppose - this would create an unnecessary barrier to admins who, for real life reasons, have limited engagement for a bit. Asking the tools back at BN can feel like a faff. Plus, logged admin activity is a poor guide to actual admin activity. In some areas, maybe half of actions aren't logged? —Femke 🐦 (talk) 19:17, 4 December 2024 (UTC)
  • Oppose. First, not all admin actions are logged as such. One example which immediately comes to mind is declining an unblock request. In the logs, that's just a normal edit, but it's one only admins are permitted to make. That aside, if someone has remained at least somewhat engaged with the project, they're showing they're still interested in returning to more activity one day, even if real-life commitments prevent them from it right now. We all have things come up that take away our available time for Misplaced Pages from time to time, and that's just part of life. Say, for example, someone is currently engaged in a PhD program, which is a tremendously time-consuming activity, but they still make an edit here or there when they can snatch a spare moment. Do we really want to discourage that person from coming back around once they've completed it? Seraphimblade 21:21, 4 December 2024 (UTC)
    We could declare specific types of edits which count as admin actions despite being mere edits. It should be fairly simple to write a bot which checks if an admin has added or removed specific texts in any edit, or made any of specific modifications to pages. Checking for protected edits can be a little harder (we need to check for protection at the time of edit, not at the time of the check), but even this can be managed. Edits to pages which match specific regular expression patterns should be trivial to detect. Animal lover |666| 11:33, 9 December 2024 (UTC)
  • Oppose There's no indication that this is a problem that needs fixing. SWATJester 00:55, 5 December 2024 (UTC)
  • Support Admins who don't use the tools should not have the tools. * Pppery * it has begun... 03:55, 5 December 2024 (UTC)
  • Oppose While I have never accepted "not all admin actions are logged" as a realistic reason for no logged actions in an entire year, I just don't see what problematic group of admins this is in response to. Previous tweaks to the rules were in response to admins that seemed to be gaming the system, that were basically inactive and when they did use the tools they did it badly, etc. We don't need a rule that isn't pointed at a provable, ongoing problem. Just Step Sideways 19:19, 8 December 2024 (UTC)
  • Oppose If an admin is still editing, it's not unreasonable to assume that they are still up to date with policies, community norms etc. I see no particular risk in allowing them to keep their tools. Scribolt (talk) 19:46, 8 December 2024 (UTC)
  • Oppose: It feels like some people are trying to accelerate admin attrition and I don't know why. This is a solution in search of a problem. Gnomingstuff (talk) 07:11, 10 December 2024 (UTC)
  • Oppose Sure there is a problem, but the real problem I think is that it is puzzling why they are still admins. Perhaps we could get them all to make a periodic 'declaration of intent' or some such every five years that explains why they want to remain an admin. Alanscottwalker (talk) 19:01, 11 December 2024 (UTC)
  • Oppose largely per scribolt. We want to take away mops from inactive accounts where there is a risk of them being compromised, or having got out of touch with community norms, this proposal rather targets the admins who are active members of the community. Also declining incorrect deletion tags and AIV reports doesn't require the use of the tools, doesn't get logged but is also an important thing for admins to do. ϢereSpielChequers 07:43, 15 December 2024 (UTC)
  • Oppose. What is the motivation for this frenzy to make more hoops for admins to jump through and use not jumping through hoops as an excuse to de-admin them? What problem does it solve? It seems counterproductive and de-inspiring when the bigger issue is that we don't have enough new admins. —David Eppstein (talk) 07:51, 17 December 2024 (UTC)
  • Oppose Some admin actions aren't logged, and I also don't see why this is necessary. Worst case scenario, we have WP:RECALL. QuicoleJR (talk) 15:25, 17 December 2024 (UTC)
  • Oppose I quite agree with David Eppstein's sentiment. What's with the rush to add more hoops? Is there some problem with the admin corps that we're not adequately dealing with? Our issue is that we have too few admins, not that we have too many. CaptainEek 23:20, 22 December 2024 (UTC)
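One suggestion in the survey above is a bot that flags "mere edits" which nonetheless imply admin judgement, by matching the text added in each diff against agreed patterns. A minimal sketch of the matching side follows; the two patterns are illustrative guesses only (a real tool would need a community-vetted pattern list and would pull diff text via the MediaWiki API, neither of which is shown here):

```python
import re

# Illustrative patterns for edits that imply admin review even though no
# log entry is made. These specific templates/parameters are assumptions
# for the sketch, not a vetted list.
ADMIN_EDIT_PATTERNS = [
    # answering an unblock request
    re.compile(r"\{\{\s*unblock reviewed\s*\|", re.I),
    # marking a protected-edit request as answered
    re.compile(r"\{\{\s*edit (fully-)?protected\s*\|.*answered\s*=\s*yes", re.I | re.S),
]

def looks_like_admin_edit(added_text: str) -> bool:
    """Return True if the text added in a diff matches any admin-edit pattern."""
    return any(p.search(added_text) for p in ADMIN_EDIT_PATTERNS)

print(looks_like_admin_edit("{{unblock reviewed|1=reason|decline=No.}}"))
```

The hard part, as noted in the survey, isn't the regex matching but deciding which patterns genuinely count as admin activity and handling actions (like viewing deleted content) that leave no edit at all.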

Discussion (Admin inactivity removal)

  • Making administrative actions can be helpful to show that the admin is still up-to-date with community norms. We could argue that if someone is active but doesn't use the tools, it isn't a big issue whether they have them or not. Still, the tools can be requested back following an inactivity desysop, if the formerly inactive admin changes their mind and wants to make admin actions again. For now, I don't see any immediate issues with this proposal. Chaotic Enby (talk · contribs) 08:13, 4 December 2024 (UTC)
  • Looking back at previous RFCs, in 2011 the reasoning was to reduce the attack surface for inactive account takeover, and in 2022 it was about admins who haven't been around enough to keep up with changing community norms. What's the justification for this besides "use it or lose it"? Further, we already have a mechanism (from the 2022 RFC) to account for admins who make a few edits every year. Anomie 12:44, 4 December 2024 (UTC)
  • I also note that not all admin actions are logged. Logging editing through full protection requires abusing the Edit Filter extension. Reviewing of deleted content isn't logged at all. Who will decide whether an admin's XFD "keep" closures are really WP:NACs or not? Do adminbot actions count for the operator? There are probably more examples. Currently we ignore these edge cases since the edits will probably also be there, but now if we can desysop someone who made 100,000 edits in the year we may need to consider them. Anomie 12:44, 4 December 2024 (UTC)
    I had completely forgotten that many admin actions weren't logged (and thus didn't "count" for activity levels), that's actually a good point (and stops the "community norms" arguments as healthy levels of community interaction can definitely be good evidence of that). And, since admins desysopped for inactivity can request the tools back, an admin needing the bit but not making any logged actions can just ask for it back. At this point, I'm not sure if there's a reason to go through the automated process of desysopping/asking for resysop at all, rather than just politely ask the admin if they still need the tools. I'm still very neutral on this by virtue of it being a pretty pointless and harmless process either way (as, again, there's nothing preventing an active admin desysopped for "inactivity" from requesting the tools back), but I might lean oppose just so we don't add a pointless process for the sake of it. Chaotic Enby (talk · contribs) 15:59, 4 December 2024 (UTC)
  • To me this comes down to whether the community considers it problematic for an admin to have tools they aren't using. Since it's been noted that not all admin actions are logged, and an admin who isn't using their tools also isn't causing any problems, I'm not sure I see a need to actively remove the tools from an inactive admin; in a worst-case scenario, isn't this encouraging an admin to (potentially mis-)use the tools solely in the interest of keeping their bit? There also seems to be somewhat of a bad-faith assumption to the argument that an admin who isn't using their tools may also be falling behind on community norms. I'd certainly like to hope that if I was an admin who had been inactive that I would review P&G relevant to any admin action I intended to undertake before I executed. DonIago (talk) 15:14, 4 December 2024 (UTC)
  • As I have understood it, the original rationale for desysopping after no activity for a year was the perception that an inactive account was at higher danger of being hijacked. It had nothing to do with how often the tools were being used, and presumably, if the admin was still editing, even if not using the tools, the account was less likely to be hijacked. - Donald Albury 22:26, 4 December 2024 (UTC)
    And also, if the account of an active admin was hijacked, both the account owner and those they interact with regularly would be more likely to notice the hijacking. The sooner a hijacked account is identified as hijacked, the sooner it is blocked/locked which obviously minimises the damage that can be done. Thryduulf (talk) 00:42, 5 December 2024 (UTC)
  • I was not aware that not all admin actions are logged, obviously they should all be correctly logged as admin actions. If you're an Admin you should be doing Admin stuff, if not then you obviously don't need the tools. If an Admin is busy IRL then they can either give up the tools voluntarily or get desysopped for inactivity. The "Asking the tools back at BN can feel like a faff." isn't a valid argument, if an Admin has been desysopped for inactivity then getting the tools back should be "a faff". Regarding the comment that "There's no indication that this is a problem needs fixing." the problem is Admins who don't undertake admin activity, don't stay up to date with policies and norms, but don't voluntarily give up the tools. The 2022 change was about total edits over 5 years, not specifically admin actions and so didn't adequately address the issue. Mztourist (talk) 03:23, 5 December 2024 (UTC)
    obviously they should all be correctly logged as admin actions - how would you log actions that are administrative actions due to context/requiring passive use of tools (viewing deleted content, etc.) rather than active use (deleting/undeleting, blocking, and so on)/declining requests where accepting them would require tool use? (e.g. closing various discussions that really shouldn't be NAC'd, reviewing deleted content, declining page restoration) Maybe there are good ways of doing that, but I haven't seen any proposed the various times this subject came up. Unless and until "soft" admin actions are actually logged somehow, "editor has admin tools and continues to engage with the project by editing" is the closest, if very imperfect, approximation to it we have, with criterion 2 sort-of functioning to catch cases of "but these specific folks edit so little over a prolonged time that it's unlikely they're up-to-date and actively engaging in soft admin actions". (I definitely do feel criterion 2 could be significantly stricter, fwiw) AddWittyNameHere 05:30, 5 December 2024 (UTC)
    Not being an Admin I have no idea how their actions are or aren't logged, but is it a big ask that Admins perform at least a few logged Admin actions in a year? The "imperfect, approximation" that "editor has admin tools and continues to engage with the project by editing" is completely inadequate to capture Admin inactivity. Mztourist (talk) 07:06, 6 December 2024 (UTC)
    Why is it "completely inadequate"? Thryduulf (talk) 10:32, 6 December 2024 (UTC)
    I've been a "hawk" regarding admin activity standards for a very long time, but this proposal comes off as half-baked. The rules we have now are the result of careful consideration and incremental changes aimed at specific, provable issues with previous standards. While I am not a proponent of "not all actions are logged" as a blanket excuse for no logged actions in several years, it is feasible that an admin could be otherwise fully engaged with the community while not having any logged actions. We haven't been having trouble with admins who would be removed by this, so where's the problem? Just Step Sideways 19:15, 8 December 2024 (UTC)

"Blur all images" switch

Although I know about WP:NOTCENSORED, I propose that the Vector 2022 and Minerva Neue skins (plus the Wikipedia mobile apps) have a "blur all images" toggle that blurs all the images on all pages (requiring clicking on them to view them), which simplifies the process described at HELP:NOSEE, as that means:

  1. You don't need to create an account to hide all images.
  2. You don't need any complex JavaScript or CSS installation procedures. Not even browser extensions.
  3. You can blur all images in the mobile apps, too.
  4. It's all done with one push of a button. No extra steps needed.
  5. Blurring all images > hiding all images. The content of a blurred image could be easily memorized, while a completely hidden image is difficult to compare to the others.

And it shouldn't be limited to just Wikipedia. This toggle should be available on all other WMF projects and MediaWiki-powered wikis, too. 67.209.128.126 (talk) 15:26, 5 December 2024 (UTC)

Sounds good. Damon will be thrilled. Martinevans123 (talk) 15:29, 5 December 2024 (UTC)
Sounds like something I can try to make a demo of as a userscript! Chaotic Enby (talk · contribs) 15:38, 5 December 2024 (UTC)
User:Chaotic Enby/blur.js should do the job, although I'm not sure how to deal with the Page Previews extension's images. Chaotic Enby (talk · contribs) 16:16, 5 December 2024 (UTC)
Wow, @Chaotic Enby, is that usable on all skins/browsers/devices? If so, we should be referring people to it from everywhere instead of the not-very-helpful WP:NOSEE, which I didn't even bother to try to figure out. Valereee (talk) 15:00, 17 December 2024 (UTC)
I haven't tested it beyond my own setup, although I can't see reasons why it wouldn't work elsewhere. However, there are two small bugs I'm not sure how to fix: when loading a new page, the images briefly show up for a fraction of a second before being blurred; and the images in Page Previews aren't blurred (the latter, mostly because I couldn't get the html code for the popups). Chaotic Enby (talk · contribs) 16:57, 17 December 2024 (UTC)
Ah, yes, I see both of those. Probably best to get at least the briefly-showing bug fixed before recommending it generally. The page previews would be good to fix but may be less of an issue for recommending generally, since people using that can be assumed to know how to turn it off. Valereee (talk) 18:28, 17 December 2024 (UTC)
I don't think there's a way to get around when the Javascript file is loaded and executed. I think users will have to modify their personal CSS file to blur images on initial load, much like the solution described at Help:Options to hide an image § Hide all images until they are clicked on. isaacl (talk) 18:41, 17 December 2024 (UTC)
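To make the style-injection idea concrete, here is a minimal sketch of what such a userscript could look like. Everything in it is illustrative: the function names, the 10px radius, and the bare `img` selector are assumptions, and this is not the actual code of User:Chaotic Enby/blur.js.

```javascript
// Minimal sketch of a "blur all images" userscript (illustrative only).
// Injecting one stylesheet rule, instead of looping over every <img>
// after the page loads, is what avoids the flash of unblurred images.

function buildBlurCss(blurPx) {
  // Blur every image; remove the blur on hover/focus so a single
  // interaction reveals the image.
  return [
    'img { filter: blur(' + blurPx + 'px); }',
    'img:hover, img:focus { filter: none; }'
  ].join('\n');
}

function installBlurStyle(doc, blurPx) {
  var style = doc.createElement('style');
  style.textContent = buildBlurCss(blurPx);
  // document.head may not exist if this runs very early; fall back to
  // the root element so the rule still applies before first paint.
  (doc.head || doc.documentElement).appendChild(style);
  return style;
}

// In a userscript one would call, for example:
// installBlurStyle(document, 10);
```

A real gadget would also need a persistent toggle; for logged-out readers that state would have to live client-side (e.g. in localStorage), since there is no account preference to save it to.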
@Valereee -- the issue with a script would be as follows:
  1. Even for logged-in users, user scripts are a moderate barrier to install (digging through settings, or worse still, having to copy-paste to the JS/CSS user pages).
  2. The majority of readers do not have an account, and the overwhelming majority of all readers make zero edits. For many people, it's too much of a hassle to sign up (or they can't remember their password, among other reasons).
What all readers and users have, though, is this menu:
I say instead of telling the occasional IP or user who complains to install a script (there are probably many more people who object to NOTCENSORED, but don't want to or don't know how to voice objections), we could add the option to replace all images with a placeholder (or blur) and perhaps also an option to increase thumbnail size.
On the image blacklist aspect, doesn't Anomie have a script that hides potentially offensive images? I've not a clue how it works, but perhaps it could be added to the appearance menu (I don't support this myself, for a number of reasons)
JayCubby 18:38, 17 December 2024 (UTC)
That's User:Anomie/hide-images, which is already listed on WP:NOSEE. I wrote it a long time ago as a joke for one of these kinds of discussions: it does very well at hiding all "potentially offensive" images because it hides all images. But people who want to have to click to see any images found it useful enough to list it on WP:NOSEE. Anomie 22:52, 17 December 2024 (UTC)
Out of curiosity, how does it filter for potentially offensive images? The code at user:Anomie/hide-images.js seems rather minimal (as I write this, I realize it may work by hiding all images, so I may have answered my own question). JayCubby 22:58, 17 December 2024 (UTC)
because it hides all images isaacl (talk) 23:11, 17 December 2024 (UTC)
Will be a problem for non-registered users, as the default for them would clearly be to leave images unblurred. — Masem (t) 15:40, 5 December 2024 (UTC)
Better show all images by default for all users. If you clear your cookies often you can simply change the toggle every time. 67.209.128.132 (talk) 00:07, 6 December 2024 (UTC)
That's my point: if you are unregistered, you will see whatever the default setting is (which I assume will be unblurred, which might lead to more complaints). We had similar problems dealing with image thumbnail sizes, a setting that unregistered users can't adjust. Masem (t) 01:10, 6 December 2024 (UTC)
I'm confused about how this would lead to more complaints. Right now, logged-out users see every image without obfuscation. After this toggle rolls out, logged-out users would still see every image without obfuscation. What fresh circumstance is leading to new complaints? ꧁Zanahary07:20, 12 December 2024 (UTC)
Well, we'd be putting in an option to censor, but not actively doing it. People will have issues with that. Lee Vilenski 10:37, 12 December 2024 (UTC)
Isn't the page Help:Options to hide an image "an option to censor" we've put in? Gråbergs Gråa Sång (talk) 11:09, 12 December 2024 (UTC)
I'm not opposed to this, if it can be made to work, fine. Gråbergs Gråa Sång (talk) 19:11, 5 December 2024 (UTC)
What would be the goal of a "blur all images" option? It seems too tailored. But a "hide all images" option could be suitable. EEpic (talk) 06:40, 11 December 2024 (UTC)
Simply removing them might break page layout, so images could be replaced with an equally sized placeholder. JayCubby 13:46, 13 December 2024 (UTC)

Could there be an option to simply not load images for people with a low-bandwidth connection or who don't want them? Travellers & Tinkers (talk) 16:36, 5 December 2024 (UTC)

I agree. This way, the options would go as
  • Show all images
  • Blur all images
  • Hide all images
It would honestly be better with your suggestion. 67.209.128.132 (talk) 00:02, 6 December 2024 (UTC)
Of course, it will do nothing to appease the "These pics shouldn't be on WP at all" people. Gråbergs Gråa Sång (talk) 06:52, 6 December 2024 (UTC)
“Commons be thataway” is what we should tell them Dronebogus (talk) 18:00, 11 December 2024 (UTC)
I suggest that the "hide all images" option display the file name if possible. Between file name and caption (which admittedly are often similar, but not always), there should be sufficient clue as to whether an image will be useful (and some suggestion, though not reliably so, of whether it may offend a sensibility). -- Nat Gertler (talk) 17:59, 11 December 2024 (UTC)
For low-bandwidth or expensive bandwidth -- many folks are on mobile plans which charge for bandwidth. -- Nat Gertler (talk) 14:28, 11 December 2024 (UTC)

Regarding not limiting image management choices to Misplaced Pages: that's why it's better to manage this on the client side. Anyone needing to limit their bandwidth usage, or to otherwise decide individually on whether or not to load each photo, will likely want to do this generally in their web browsing. isaacl (talk) 18:43, 6 December 2024 (UTC)

Definitely a browser issue. You can get plug-ins for Chrome right now that will do exactly this, and there's no need for Wikipedia/MediaWiki to implement anything. — The Anome (talk) 18:48, 6 December 2024 (UTC)

I propose something a bit different: all images on the bad images list can only be viewed with a user account that has been verified to be over 18 with government-issued ID. I say this because in my view there is absolutely no reason for a minor to view it. Jayson (talk) 23:41, 8 December 2024 (UTC)

Well, that means readers will be forced to not only create an account, but also disclose sensitive personal information, just to see encyclopedic images. That is pretty much the opposite of a free encyclopedia. Chaotic Enby (talk · contribs) 23:44, 8 December 2024 (UTC)
I can support allowing users to opt to blur or hide some types of images, but this needs to be an opt-in only. By default, show all images. And I'm also opposed to any technical restriction which requires self-identification to overcome, except for cases where the Foundation deems it necessary to protect private information (checkuser, oversight-level hiding, or emails involving private information). Please also keep in mind that even if a user sends a copy of an ID which indicates the individual person's age, there is no way to verify that it was the user's own ID which had been sent. Animal lover |666| 11:25, 9 December 2024 (UTC)
Also, the bad images list is a really terrible standard. Around 6% of it is completely harmless content that happened to be abused. And even some of the “NSFW” images are perfectly fine for children to view, for example File:UC and her minutes-old baby.jpg. Are we becoming Texas or Florida now? Dronebogus (talk) 18:00, 11 December 2024 (UTC)
You could've chosen a much better example like dirty toilet or the flag of Hezbollah... Traumnovelle (talk) 19:38, 11 December 2024 (UTC)
Well, yes, but I rank that as “harmless”. I don’t know why anyone would consider a woman with her newborn baby so inappropriate for children it needs to be censored like hardcore porn. Dronebogus (talk) 14:53, 12 December 2024 (UTC)
The Hezbollah flag might be blacklisted because it's copyrighted, but placed in articles by uninformed editors (though one of JJMC89's bots automatically removes NFC files from pages). We have File:InfoboxHez.PNG for those uses. JayCubby 16:49, 13 December 2024 (UTC)
I support this proposal. It’s a very clean compromise between the “think of the children” camp and the “freeze peach camp”. Dronebogus (talk) 17:51, 11 December 2024 (UTC)
Let me dox myself so I can view this image. Even Google image search doesn't require something this stringent. Lee Vilenski 19:49, 11 December 2024 (UTC)
oppose should not be providing toggles to censor. ValarianB (talk) 15:15, 12 December 2024 (UTC)
What about an option to disable images entirely? It might use significantly less data. JayCubby 02:38, 13 December 2024 (UTC)
This is an even better idea as an opt-in toggle than the blur one. Load no images by default, and let users click a button to load individual images. That has a use beyond sensitivity. ꧁Zanahary02:46, 13 December 2024 (UTC)
Yes I like that idea even better. I think in any case we should use alt text to describe the image so people don’t have to play Russian roulette based on potentially vague or nonexistent descriptions, i.e. without alt text an ignorant reader would have no idea the album cover for Virgin Killer depicts a nude child in a… questionable pose. Dronebogus (talk) 11:42, 13 December 2024 (UTC)
An option to replace images with alt text seems both much more useful and much more neutral as an option. There are technical reasons why a user might want to not load images (or only selectively load them based on the description), so that feels more like a neutral interface setting. An option to blur images by default sends a stronger message that images are dangerous.--Trystan (talk) 16:24, 13 December 2024 (UTC)
Also, it'd negate the bandwidth savings somewhat (assuming an image is displayed as a low pixel-count version). I'm of the belief that Wikipedia should have more features tailored to the reader. JayCubby 16:58, 13 December 2024 (UTC)
At the very least, add a filter that allows you to block all images on the bad image list, specifically that list and those images. To the people who say you shouldn't have to give up personal info, I say that we should go the way Roblox does. Seems a bit random, hear me out: To play 17+ games, you need to verify with gov ID, and those games have blood, alcohol, unplayable gambling, and "romance". I say that we do the same. Giving up personal info to view bad things doesn't seem so bad to me. Jayson (talk) 03:44, 15 December 2024 (UTC)
Building up a database of people who have applied to view bad things on a service that's available in restrictive regimes sounds like a way of putting our users in danger. -- Nat Gertler (talk) 07:13, 15 December 2024 (UTC)
Roblox =/= Misplaced Pages. I don’t know why I have to say this, nor did I ever think I would. And did you read what I already said about the “bad list”? Do you want people to have to submit their ID to look at poop, a woman with her baby, the Hezbollah flag, or graffiti? How about we age-lock articles about adult topics next? Dronebogus (talk) 15:55, 15 December 2024 (UTC)
Ridiculous. Lee Vilenski 16:21, 15 December 2024 (UTC)
So removing a significant thing that makes Wikipedia free is worth preventing underage users from viewing certain images? I wouldn't say that would be a good idea if we want to make Wikipedia stay successful. If a reader wants to read an article, they should expect to see images relevant to the topic. This includes topics that are usually considered NSFW like Graphic violence, Sexual intercourse, et cetera. If a person willingly reads an article about an NSFW topic, they should acknowledge that they would see topic-related NSFW images. ZZ'S 16:45, 15 December 2024 (UTC)
What "bad things"? You haven't listed any. --User:Khajidha (talk) (contributions) 15:57, 17 December 2024 (UTC)
This is moot. Requiring personal information to use Wikipedia isn't something this community even has the authority to do. Valereee (talk) 16:23, 17 December 2024 (UTC)
Yes, if this happens it should be through a disable all images toggle, not an additional blur. There have been times that would have been very helpful for me. CMD (talk) 03:52, 15 December 2024 (UTC)
Support the proposal as written. I'd imagine WMF can add a button below the already-existing accessibility options. People have different cultural, safety, age-related, and mental-health reasons to want to block certain images. Ca 13:04, 15 December 2024 (UTC)
I'd support an option to replace images with the alt text, as long as all you had to do to see a hidden image was a single click/tap (we'd need some fallback for when an image has no alt text, but that's a minor issue). Blurring images doesn't provide any significant bandwidth benefits and could in some circumstances cause problems (some blurred innocent images look very similar to blurred versions of images that some people regard as problematic, e.g. human flesh and cooked chicken). I strongly oppose anything that requires submitting personal information of any sort in order to see images per NatGertler. Thryduulf (talk) 14:15, 15 December 2024 (UTC)
Fallback for alt text could be filename, which is generally at least slightly descriptive. -- Nat Gertler (talk) 14:45, 15 December 2024 (UTC)
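As a rough sketch of this alt-text-with-filename-fallback idea (illustrative only: the placeholder markup, class name, and URL handling are assumptions, not an existing script):

```javascript
// Sketch of "replace images with their alt text", falling back to the
// file name when no alt text is present (illustrative only).

function labelForImage(alt, src) {
  if (alt && alt.trim() !== '') {
    return alt.trim();
  }
  // Fall back to the file name: last path segment, query string
  // stripped, URL-decoded, underscores replaced with spaces.
  var name = src.split('/').pop().split('?')[0];
  return decodeURIComponent(name).replace(/_/g, ' ');
}

function replaceImagesWithText(doc) {
  var imgs = Array.prototype.slice.call(doc.querySelectorAll('img'));
  imgs.forEach(function (img) {
    var span = doc.createElement('span');
    span.className = 'image-placeholder'; // assumed class name
    span.textContent = '[' + labelForImage(img.alt, img.src) + ']';
    img.parentNode.replaceChild(span, img);
  });
}
```

A real version would also want the single click/tap to restore the original image, e.g. by keeping a reference to the replaced element on the placeholder.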

Class icons in categories

This is something that has frequently occurred to me as a potentially useful feature when browsing categories, but I have never quite gotten around to actually proposing it until now.

Basically, I'm thinking it could be very helpful to have content-assessment class icons appear next to article entries in categories. This should be helpful not only to readers, to guide them to the more complete entries, but also to editors, to alert them to articles in the category that are in need of work. Thoughts? Gatoclass (talk) 03:02, 7 December 2024 (UTC)

If we go with this, I think there should be only 4 levels - Stub, Average (i.e. Start, C, or B), GA, & FA.
There are significant differences between Start, C, and B, but there's no consistent effort to grade these articles correctly and consistently, so it might be better to lump them into one group. Especially if an article goes down in quality, almost nobody will bother to demote it from B to C. ypn^2 04:42, 8 December 2024 (UTC)
Isn't that more of an argument for consolidation of the existing levels rather than reducing their number for one particular application?
Other than that, I think I would have to agree that there are too many levels - the difference between Start and C class, for example, seems quite arbitrary, and I'm not sure of the usefulness of A class - but the lack of consistency within levels is certainly not confined to these lower levels, as even GAs and FAs can vary enormously in quality. But the project nonetheless finds the content assessment model to be useful, and I still think their usefulness would be enhanced by addition to categories (with, perhaps, an ability to opt in or out of the feature).
I might also add that including content assessment class icons to categories would be a good way to draw more attention to them and encourage users to update them when appropriate. Gatoclass (talk) 14:56, 8 December 2024 (UTC)
I believe anything visible in reader-facing namespaces needs to be more definitively accurate than in editor-facing namespaces. So I'm fine having all these levels on talk pages, but not on category pages, unless they're applied more rigorously.
On the other hand, with FAs and GAs, although standards vary within a range, they do undergo a comprehensive, well-documented, and consistent process for promotion and demotion. So just like we have an icon at the top of those articles (and in the past, next to interwiki links), I could hear putting them in categories. ypn^2 18:25, 8 December 2024 (UTC)
Isn't the display of links on Category pages entirely dependent on the MediaWiki software? We don't even have Short descriptions displayed, which would probably be considerably more useful. Any function that has to retrieve content from member articles (much less their talk pages) is likely to be somewhat computationally expensive. Someone with more technical knowledge may have better information. Folly Mox (talk) 18:01, 8 December 2024 (UTC)
Yes, this will definitely require MediaWiki development, but probably not so complex. And I wonder why this will be more computationally expensive than scanning articles for category tags in the first place. ypn^2 18:27, 8 December 2024 (UTC)
And I wonder why this will be more computationally expensive than scanning articles for category tags in the first place: my understanding is that this is not what happens. When a category is added to or removed from an article, the software adds or removes a record for that page in a database, and that database is what is read when viewing the category page. Thryduulf (talk) 20:14, 8 December 2024 (UTC)
I think that in the short term, this could likely be implemented using a user script (displaying short descriptions would also be nice). Longer-term, if done via an extension, I suggest limiting the icons to GAs and FAs for readers without accounts, as other labels aren't currently accessible to them. (Whether this should change is a separate but useful discussion). — Frostly (talk) 23:06, 8 December 2024 (UTC)
I'd settle for a user script. Who wants to write it? :) Gatoclass (talk) 23:57, 8 December 2024 (UTC)
As an FYI for whoever decides to write it, Special:ApiHelp/query+pageassessments may be useful to you. Anomie 01:04, 9 December 2024 (UTC)
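For whoever picks this up, a sketch of the lookup half of such a script (the response shape and the first-project-wins rule are assumptions to check against Special:ApiHelp/query+pageassessments; fetching and icon rendering are omitted):

```javascript
// Sketch of extracting assessment classes from a PageAssessments API
// response (action=query&prop=pageassessments). The response shape
// assumed here should be verified against the API help page.

function extractClasses(apiResponse) {
  var result = {};
  var pages = (apiResponse.query && apiResponse.query.pages) || {};
  Object.keys(pages).forEach(function (id) {
    var page = pages[id];
    var assessments = page.pageassessments || {};
    // Different WikiProjects may grade the same page differently;
    // for simplicity this sketch just takes the first class found.
    var cls = null;
    Object.keys(assessments).some(function (project) {
      if (assessments[project].class) {
        cls = assessments[project].class;
        return true;
      }
      return false;
    });
    if (cls) {
      result[page.title] = cls;
    }
  });
  return result;
}
```

A category-page script would batch the member titles into one such query, then place an icon next to each link based on the returned class.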
@Gatoclass, the Wikipedia:Metadata gadget already exists. Go to Special:Preferences#mw-prefsection-gadgets-gadget-section-appearance and scroll about two-thirds of the way through that section.
I strongly believe that ordinary readers don't care about this kind of inside baseball, but if you want it for yourself, then use the gadget or fork its script. Changing this old gadget from "adding text and color" to "displaying an icon" should be relatively simple. WhatamIdoing (talk) 23:43, 12 December 2024 (UTC)
  • I strongly oppose loading any default javascript solution that would cause hundreds of client-side queries every time a category page is opened. As far as making an upstream software request, there are multiple competing page quality metrics and schemes that would need to be reviewed. — xaosflux 15:13, 18 December 2024 (UTC)

Cleaning up NA-class categories

We have a long-standing system of double classification of pages, by quality (stub, start, C, ...) and importance (top, high, ...). And then there are thousands of pages that don't need either of these: portals, redirects, categories, ... As a result, most of these pages have a double or even triple categorization, e.g. Portal talk:American Civil War/This week in American Civil War history/38 is in Category:Portal-Class United States articles, Category:NA-importance United States articles, and Category:Portal-Class United States articles of NA-importance.

My suggestion would be to put those pages only in the "Class" category (in this case Category:Portal-Class United States articles), and only give that category an NA rating. Doing this for all these subcats (File, Template, ...) would bring the 276,534 (!) pages currently in Category:NA-importance United States articles back to near zero, leaving only the anomalies, which probably need a different importance rating (and thus making it a useful cleanup category).

It is unclear why we have two systems (3 cat vs. 2 cat): the tags on Category talk:2nd millennium in South Carolina (without class or NA indication) have a different effect than the tags on e.g. Category talk:4 ft 6 in gauge railways in the United Kingdom, but my proposal is to make the behaviour the same, and in both cases to reduce it to the class category only (and make the classes themselves categorize as "NA importance"). This would only require an update in the templates/modules behind this, not on the pages directly, I think. Fram (talk) 15:15, 9 December 2024 (UTC)

Are there any pages that don't have the default? e.g. are there any portals or Category talk: pages rated something other than N/A importance? If not then I can't see any downsides to the proposal as written. If there are exceptions, then as long as the revised behaviour allows for the default to be overwritten when desired again it would seem beneficial. Thryduulf (talk) 16:36, 9 December 2024 (UTC)
As far as I know, there are no exceptions. And I believe that one can always override the default behaviour with a local parameter. @Tom.Reding: I guess you know these things better and/or knows who to contact for this. Fram (talk) 16:41, 9 December 2024 (UTC)
Looking a bit further, there do seem to be exceptions, but I wonder why we would e.g. have redirects which are of high importance to a project (Category:Redirect-Class United States articles of High-importance). Certainly when one considers that in some cases, the targets have a lower importance than the redirects? E.g. Talk:List of Mississippi county name etymologies. Fram (talk) 16:46, 9 December 2024 (UTC)
I was imagining high-importance United States redirects to be things like USA, but that isn't there, and what is there is a very motley collection. I only took a look at one, Talk:United States women. As far as I can make out, the article was originally at this title but later moved to Women in the United States over a redirect. Both titles had independent talk pages that were neither swapped nor combined, each being rated high importance when they were the talk page of the article. It seems like a worthwhile exercise for the project to determine whether any of those redirects are actually (still?) high priority, but that's independent of this proposal. Thryduulf (talk) 17:17, 9 December 2024 (UTC)
Category:Custom importance masks of WikiProject banners (15) is where to look for projects that might use an importance other than NA for cats, or other deviations.   ~ Tom.Reding (talk ⋅ dgaf) 17:54, 9 December 2024 (UTC)
Most projects don't use this double intersection (as can be seen by the number of categories in Category:Articles by quality and importance, compared to Category:GA-Class articles). I personally feel that a bot-updated page like User:WP 1.0 bot/Tables/Project/Television is enough here and requires less category maintenance (creating, moving, updating, etc.) for a system that is underused. Gonnym (talk) 17:41, 9 December 2024 (UTC)
Support this, even if there might be a few exceptions, it will make them easier to spot and deal with rather than having large unsorted NA-importance categories. Chaotic Enby (talk · contribs) 18:04, 9 December 2024 (UTC)
Strongly agree with this. It's bizarre having two different systems, as well as a pain in the ass sometimes. Ideally we should adopt a single consistent categorization system for importance/quality. – Closed Limelike Curves (talk) 22:56, 16 December 2024 (UTC)

Okay, does anyone know what should be changed to implement this? I presume this comes from Module:WikiProject banner, I'll inform the people there about this discussion. Fram (talk) 14:49, 13 December 2024 (UTC)

So essentially what you are proposing is to delete Category:NA-importance articles and all its subcategories? I think it would be best to open a CfD for this, so that the full implications can be discussed and consensus assured. It is likely to have an effect on assessment tools, and tables such as User:WP 1.0 bot/Tables/Project/Africa would no longer add up to the expected number — Martin (MSGJ · talk) 22:13, 14 December 2024 (UTC)
There was a CfD specifically for one, and the deletion of Category:Category-Class Comics articles of NA-importance doesn't seem to have broken anything so far. A CfD for the deletion of 1700+ pages seems impractical, an RfC would be better probably. Fram (talk) 08:52, 16 December 2024 (UTC)
Well a CfD just got closed with 14,000 categories, so that is not a barrier. It is also the technically correct venue for such discussions. By the way, all of the quality/importance intersection categories check that the category exists before using it, so deleting them shouldn't break anything. — Martin (MSGJ · talk) 08:57, 16 December 2024 (UTC)
And were all these cats tagged, or how was this handled? Fram (talk) 10:21, 16 December 2024 (UTC)
Wikipedia:Categories for discussion/Log/2024 December 7#Category:Category-Class articles. HouseBlaster took care of listing each separate category on the working page. — Martin (MSGJ · talk) 10:43, 16 December 2024 (UTC)
I have no idea what the "working page" is though. Fram (talk) 11:02, 16 December 2024 (UTC)

I'm going to have to oppose any more changes to class categories. Already changes are causing chaos across the system, with the bots unable to process renamings and fix redirects whilst Special:Wantedcategories is being overwhelmed by the side effects. Quite simply, we must have no more changes that cannot be properly processed. Any proposal must have clear instructions posted before it is initiated, not some vague promise to fix a module later on. Timrollpickering (talk) 13:16, 16 December 2024 (UTC)

Then I'm at an impasse. Module people tell me "start a CfD", you tell me "no CfD, first make changes at the module". No one wants the NA categories for these groups. What we can do is 1. RfC to formalize that they are unwanted, 2. Change module so they no longer get populated 3. Delete the empty cats caused by steps 1 and 2. Is that a workable plan for everybody? Fram (talk) 13:39, 16 December 2024 (UTC)
I don't think @Timrollpickering was telling you to make the changes at the module first, rather to prepare the changes in advance so that the changes can be implemented as soon as the CfD reaches consensus. For example this might be achieved by having a detailed list of all the changes prepared and published in a format that can be fed to a bot. For a change of this volume though I do think a discussion as well advertised as an RFC is preferable to a CfD though. Thryduulf (talk) 14:43, 16 December 2024 (UTC)
Got it in one. There are just too many problems at the moment because the modules are not being properly amended in time. We need to be firmer in requiring proponents to identify how the change is to be made before the proposal goes live so others can enact it if necessary, not close the discussion, slap the category on the working page and let a mess pile up whilst no changes to the module are implemented. Timrollpickering (talk) 19:37, 16 December 2024 (UTC)
Oh, I got it as well, but at the module talk page, I was told to first have a CfD (to determine consensus first I suppose, instead of writing the code without knowing if it will be implemented). As I probably lack the knowledge to make the correct module changes, I'm at an impasse. That's why I suggested an RfC instead of a CfD to determine the consensus for "deletion after the module has been changed", instead of a CfD which is more of the "delete it now" variety. No one here has really objected to the deletion per se, but I guess that a more formal discussion might be welcome. Fram (talk) 10:09, 17 December 2024 (UTC)
  • Oppose on the grounds that I think the way we do it currently is fine. PARAKANYAA (talk) 05:33, 18 December 2024 (UTC)
    • What's the benefit of having two or three categories for the same group of pages? We have multiple systems (with two or three cats, and apparently other ones as well), with no apparent reason to keep this around. As an example, we have Category:Category-Class film articles with more than 50,000 pages, e.g. Category talk:20th century in American cinema apparently. But when I go to that page, it isn't listed in that category; it is supposedly listed in Category:NA-Class film articles (which seems to be a nonsense category; we shouldn't have NA-class, only NA-importance), but that category doesn't contain that page. So now I have no idea what's going on or what any of this is trying to achieve. Fram (talk) 08:30, 18 December 2024 (UTC)
      Something changed recently. I think. But it is useful to know which NA pages are tagged with a project with a granularity beyond just "Not Article". It helps me do maintenance and find things that are tagged improperly, especially with categories. I do not care what happens to the importance ratings. PARAKANYAA (talk) 09:20, 18 December 2024 (UTC)

Category:Current sports events

I would like to propose that sports articles should be left in the Category:Current sports events for 48 hours after these events have finished. I'm sure many Misplaced Pages sports fans (including me) open CAT:CSE first and then click on a sporting event in that list. And we would like to do so in the coming days after the event ends to see the final standings and results.

Currently, this category is being removed from articles too early, sometimes even before the event ends. Just like yesterday. AnishaShar, what do you say about that?

So I would like to ask you to consider my proposal. Or, if you have a better suggestion, please comment. Thanks, Maiō T. (talk) 16:25, 9 December 2024 (UTC)

Thank you for bringing up this point. I agree that leaving articles in the Category:Current sports events for a short grace period after the event concludes—such as 48 hours—would benefit readers who want to catch up on the final standings and outcomes. AnishaShar (talk) 18:19, 9 December 2024 (UTC)
Sounds reasonable on its face. Gatoclass (talk) 23:24, 9 December 2024 (UTC)
How would this be policed though? Usually that category is populated by the {{current sport event}} template, which every user is going to want to remove immediately after it finishes. Lee Vilenski 19:51, 11 December 2024 (UTC)
@Lee Vilenski: First of all, the Category:Current sports events has nothing to do with the Template:Current sport; articles are added to that category in the usual way.
You ask how it would be policed. Simply, we will teach editors to do it that way – to leave an article in that category for another 48 hours. AnishaShar has already expressed their opinion above. WL Pro for life is also known for removing 'CAT:CSE's from articles. I think we could put some kind of notice in that category so other editors can notice it. We could set up a vote here. Maybe someone else will have a better idea. Maiō T. (talk) 20:25, 14 December 2024 (UTC)
Would it not be more suitable to have a "recently completed sports events" category? It's pretty inaccurate to say it's current when the event finished over a day ago. Lee Vilenski 21:03, 14 December 2024 (UTC)

Okay Lee, that's also a good idea. We have these two sports event categories:

I don't have any objection to a Recent sports events category being added, but personally, if I want to see results of recent sports events, I would be more likely to go to Category:December 2024 sports events, which should include all recent events. Edin75 (talk) 23:30, 16 December 2024 (UTC)
Did this get the go-ahead then? I see a comment has been added to the category, and my most recent edit was reverted when I removed the category after an event finished. I didn't see any further discussion after my last comment. Edin75 (talk) 09:37, 25 December 2024 (UTC)

User-generated conflict maps

In a number of articles we have (or had) user-generated conflict maps. I think the main ones at the moment are Syrian civil war and Russian invasion of Ukraine. The war in Afghanistan had one until it was removed as poorly-sourced in early 2021. As you can see from a brief review of Talk:Syrian civil war the map has become quite controversial there too.

My personal position is that sourcing conflict maps entirely from reports of occupation by one side or another of individual towns at various times, typically from Twitter accounts of dubious reliability, to produce a map of the current situation in an entire country (which is the process described here), is a WP:SYNTH/WP:OR violation. I also don't see liveuamap.com as necessarily being a highly reliable source either since it basically is a WP:SPS/Wiki-style user-generated source, and when it was discussed at RSN editors there generally agreed with that. I can understand it if a reliable source produces a map that we can use, but that isn't what's happening here.

Part of the reason this flies under the radar on Misplaced Pages is it ultimately isn't information hosted on EN WP but instead on Commons, where reliable sourcing etc. is not a requirement. However, it is being used on Misplaced Pages to present information to users and therefore should fall within our PAGs.

I think these maps should be deprecated unless they can be shown to be sourced entirely to a reliable source, and not assembled out of individual reports including unreliable WP:SPS sources. FOARP (talk) 16:57, 11 December 2024 (UTC)

A lot of the maps seem like they run into SYNTH issues, and if they're based on single sources they're likely running into copyright issues as derivative works. I would agree though that if an image does not have clear sourcing it shouldn't be used, as it runs into primary/synth issues. Der Wohltemperierte Fuchs 17:09, 11 December 2024 (UTC)
Though simple information isn't copyrightable, if it's sufficiently visually similar I suppose that might constitute a copyvio. JayCubby 02:32, 13 December 2024 (UTC)
I agree these violate OR and at least the spirit of NOTNEWS and should be deprecated. I remember during the Wagner rebellion we had to fix one that incorrectly depicted Wagner as controlling a swath of Russia. Levivich (talk) 05:47, 13 December 2024 (UTC)
Oppose: First off, I'd like to state my bias as a bit of a map geek. I've followed the conflict maps closely for years.
I think the premise of this question is flawed. Some maps may be poorly sourced, but that doesn't mean all of them are. The updates to the Syrian, Ukraine, and Burma conflicts maps are sourced to third parties. So that resolves the OR issue.
The sources largely agree with each other, which makes SYNTH irrelevant. Occasionally one source may be ahead of another by a few hours (e.g., LiveUaMap vs. ISW), but they're almost entirely in lock step.
I think this proposal throws out the baby with the bathwater. One bad map doesn't mean we stop using maps; it means we stop using bad maps.
You may not like the fact that these sources sometimes use OSINT (open-source intelligence). Unfortunately, that is the nature of conflict in a zone where the press isn't allowed. Any information you get from the AP or the US government is likely to rely on the same sources.
Do they make mistakes? Probably; but so do all historical sources. And these maps have the advantage that the Commons community continuously reviews changes made by other users. Much in the same way that Misplaced Pages is often more accurate than historical encyclopedias, I believe crowdsourcing may make these maps more accurate than historical ones.
I think deprecating these maps would leave the reader at a loss (a picture is worth a thousand words and all that). Does it get a border crossing wrong here or there? Yes, but the knowledge is largely correct.
It would be an absolute shame to lose access to this knowledge. Magog the Ogre (tc) 22:59, 19 December 2024 (UTC)
@Magog the Ogre WP:ITSUSEFUL is frowned upon as an argument for good reason. Beyond that: 1) the fact that these are based on fragmentary data is strangely not mentioned at all (Syrian civil war says 'Military situation as of December 18, 2024 at 2:00pm ET' which suggests that it's quite authoritative and should be trusted; the fact that it's based on the ISW is not disclosed.) 2) I'm not seeing how all the information is coming from the ISW. The ISW's map only covers territory; stuff like bridges, dams, "strategic hills" and the like are not present on the ISW map. Where is that info coming from? Der Wohltemperierte Fuchs 23:10, 19 December 2024 (UTC)
The Commons Syria map uses both the ISW and Liveuamap. The two are largely in agreement, with Liveuamap being more precise but using less reliable sources. If you have an issue with using Liveuamap as a source, fine, bring it up on the talk pages where it's used, or on the Commons talk page itself. But banning any map of a conflict is throwing out the baby with the bathwater. The Ukraine map is largely based on ISW-verifiable information.
With regards to actual locations like bridges, I'm against banning Commons users from augmenting maps with easily verifiable landmarks. That definition of SYN is broad to the point of meaningless, as it would apply to any user-generated content that uses more than one source. Magog the Ogre (tc) 23:50, 20 December 2024 (UTC)
Weak Oppose I've been updating the Ukraine map since May 2022, so I hope my input is helpful. While I agree that some of the sources currently being used to update these maps may be dubious in nature, that has not always been the case. In the past, particularly for the Syria map, these maps have been considered among the most accurate online due to their quality sourcing. It used to be that a source was required for each town if it was to be displayed on these maps, but more recently, people have just accepted taking sources like LiveUaMap and the ISW and copying them exactly. Personally, I think we should keep the maps but change how they are sourced. I think that going back to the old system of requiring a reliable source for each town would clear up most of the issues that you are referring to, though it would probably mean that the maps would be less detailed than they currently are now. Physeters 07:23, 21 December 2024 (UTC)
  • Oppose The campaign maps are one of our absolute best features. The Syrian campaign map in particular was very accurate for much of the war. Having a high-quality SVG of an entire country like that is awesome, and there really isn't anything else like it out there, which is why it provides such value to our readers. I think we have to recognize, of course, that they're not 100% accurate, due to the fog of war. I wouldn't mind if we created subpages about the maps? Like, with a list of sources and their dates, designed to be reader facing, so that our readers could verify the control of specific towns for themselves. But getting rid of the maps altogether is throwing out the baby with the bathwater. CaptainEek 23:33, 22 December 2024 (UTC)

Google Maps: Maps, Places and Routes

Google Maps#Google Maps API

Google Maps have the following categories: Maps, Places and Routes

for example: https://www.google.com/maps/place/Sheats+Apartments/@34.0678041,-118.4494914,3a,75y,90t/data=!...........

most significant locations have a www.google.com/maps/place/___ URL

these should be acknowledged and used somehow, perhaps via GeoHack

69.181.17.113 (talk) 00:22, 12 December 2024 (UTC)
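(For reference, here is a rough sketch of what "use these URLs, perhaps via GeoHack" might involve. It is not an existing tool: the "@lat,lng" viewport segment parsed below is the common pattern seen in /maps/place/ links, and the GeoHack "params" string format is an assumption for illustration only.)

```python
import re

def extract_latlng(url):
    """Pull the viewport coordinates out of a Google Maps URL.

    Assumes the common "@lat,lng,..." segment seen in /maps/place/ links;
    returns (lat, lng) as floats, or None if the segment is absent.
    """
    m = re.search(r"@(-?\d+(?:\.\d+)?),(-?\d+(?:\.\d+)?)", url)
    if not m:
        return None
    return float(m.group(1)), float(m.group(2))

def geohack_params(lat, lng):
    # GeoHack-style "params" string: hemisphere letters instead of signs
    ns = "N" if lat >= 0 else "S"
    ew = "E" if lng >= 0 else "W"
    return f"{abs(lat)}_{ns}_{abs(lng)}_{ew}"
```

For the Sheats Apartments URL above, this yields the coordinates (34.0678041, -118.4494914), i.e. a params string of 34.0678041_N_118.4494914_W.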

What is the proposal here? If it's for the Google Maps article, that would be more suitable for the talk page. As I see it, your proposal is simply saying that Google Maps has an API and we should use it for... something. I could be missing something, though Mgjertson (talk) 08:20, 17 December 2024 (UTC)
As I understand it, the IP is proposing embeds of google maps, which would be nice from a functionality standpoint (the embedded map is kinda-rather buggy), but given Google is an advertising company, isn't great from a privacy standpoint. JayCubby 16:25, 17 December 2024 (UTC)
I think they're proposing the use of external links rather than embedding. jlwoodwa (talk) 18:16, 17 December 2024 (UTC)

Allowing page movers to enable two-factor authentication

I would like to propose that members of the page mover user group be granted the oathauth-enable permission. This would allow them to use Special:OATH to enable two-factor authentication on their accounts.

Rationale (2FA for page movers)

The page mover guideline already obligates people in that group to have a strong password, and failing to follow proper account security processes is grounds for revocation of the right. This is because the group allows its members to (a) move pages along with up to 100 subpages, (b) override the title blacklist, and (c) have an increased rate limit for moving pages. In the hands of a vandal, these permissions could allow significant damage to be done very quickly, which is likely to be difficult to reverse.

Additionally, there is precedent for granting 2FA access to users with rights that could be extremely dangerous in the event of account compromise, for instance, template editors, importers, and transwiki importers have the ability to enable this access, as do most administrator-level permissions (sysop, checkuser, oversight, bureaucrat, steward, interface admin).

Discussion (2FA for page movers)

  • Support as proposer. JJPMaster (she/they) 20:29, 12 December 2024 (UTC)
  • Support (but if you really want 2FA you can just request permission to enable it on Meta) * Pppery * it has begun... 20:41, 12 December 2024 (UTC)
    For the record, I do have 2FA enabled. JJPMaster (she/they) 21:47, 12 December 2024 (UTC)
Oops, that says you are a member of "Two-factor authentication testers" (testers = good luck with that). Johnuniq (talk) 23:52, 14 December 2024 (UTC)
    A group name which is IMO seriously misleading - 2FA is not being tested, it's being actively used to protect accounts. * Pppery * it has begun... 23:53, 14 December 2024 (UTC)
    meta:Help:Two-factor authentication still says "currently in production testing with administrators (and users with admin-like permissions like interface editors), bureaucrats, checkusers, oversighters, stewards, edit filter managers and the OATH-testers global group." Hawkeye7 (discuss) 09:42, 15 December 2024 (UTC)
  • Support as a pagemover myself, given the potential risks and need for increased security. I haven't requested it yet as I wasn't sure I qualified and didn't want to bother the stewards, but having oathauth-enable by default would make the process a lot more practical. Chaotic Enby (talk · contribs) 22:30, 12 December 2024 (UTC)
    Anyone is qualified - the filter for stewards granting 2FA is just "do you know what you're doing". * Pppery * it has begun... 22:46, 12 December 2024 (UTC)
  • Question When's the last time a page mover has had their account compromised and used for pagemove vandalism? Edit 14:35 UTC: I'm not doubting the nom, rather I'm curious and can't think of a better way to phrase things. JayCubby 02:30, 13 December 2024 (UTC)
  • Why isn't everybody allowed to enable 2FA? I've never heard of any other website where users have to go request someone's (pro forma, rubber-stamp) permission if they want to use 2FA. And is it accurate that 2FA, after eight years, is still "experimental" and "in production testing"? I guess my overall first impression didn't inspire me with confidence in the reliability and maintenance. Adumbrativus (talk) 06:34, 14 December 2024 (UTC)
    For TOTP (the 6-digit codes), it's not quite as bad as when it was written, as the implementation has been fixed over time. I haven't heard nearly as many instances of backup scratch codes not working these days compared to when it was new. The WebAuthn (physical security keys, Windows Hello, Apple Face ID, etc) implementation works fine on private wikis but I wouldn't recommend using it for CentralAuth, especially with the upcoming SUL3 migration. There's some hope it'll work better afterward, but will still require some development effort. As far as I'm aware, WMF is not currently planning to work on the 2FA implementation. As far as risk for page mover accounts goes, they're at a moderate risk. Page move vandalism, while annoying to revert, is reversible and is usually pretty loud (actions of compromised accounts can be detected and stopped easily). The increased ratelimit is the largest concern, but compared to something like account creator (which has noratelimit) it's not too bad. I'm more concerned about new page reviewer. There probably isn't a ton of harm to enabling 2FA for these groups, but there isn't a particularly compelling need either. AntiCompositeNumber (talk) 12:47, 19 December 2024 (UTC)
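(For readers unfamiliar with how the 6-digit TOTP codes discussed above are produced: this is a minimal sketch of the generic RFC 6238/RFC 4226 algorithm, not MediaWiki's actual implementation. Any TOTP authenticator app interoperates because it computes exactly this.)

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """Generate a TOTP code (RFC 6238) from a shared secret.

    secret: raw key bytes; for_time: Unix time (defaults to now).
    """
    if for_time is None:
        for_time = int(time.time())
    # Counter = number of time steps since the Unix epoch
    counter = for_time // step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): offset taken from the low nibble
    # of the last byte, then 31 bits read from that offset
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

With the RFC 6238 test key b"12345678901234567890" at time 59, this produces the spec's published value 94287082 (8 digits).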
  • Support per nom. PMV is a high-trust role (suppressredirect is the ability to make a blue link turn red), and thus this makes sense. As a side note, I have changed this to bulleted discussion; # is used when we have separate sections for support and oppose. HouseBlaster (talk • he/they) 07:19, 14 December 2024 (UTC)
  • Oppose As a pagemover myself, I find pagemover extremely useful and do not wish to lose it. It is nowhere near the same class as template editor. You can already ask the stewards for 2FA although I would recommend creating a separate account for the purpose. After all these years, 2FA remains experimental, buggy and cumbersome. Incompatible with the Microsoft Authenticator app on my iPhone. Hawkeye7 (discuss) 23:59, 14 December 2024 (UTC)
    The proposal (as I read it) isn't "you must have 2FA", rather "you have the option to add it". Lee Vilenski 00:06, 15 December 2024 (UTC)
    @Hawkeye7, Lee Vilenski is correct. This would merely provide page movers with the option to enable it. JJPMaster (she/they) 00:28, 15 December 2024 (UTC)
    Understood, but I do not want it associated with an administrator-level permission, which would mean I am not permitted to use it, as I am not an admin. Hawkeye7 (discuss) 09:44, 15 December 2024 (UTC)
    It's not really that. It would be an opt-in to allow users (in the group) to put 2FA on their account - at their own discretion.
    The main reason why 2FA is currently rolled out only to admins and the like is that they are more likely to be targeted for compromise and are also more experienced. The 2FA flag doesn't require any admin skills/tools and is only incidentally linked. Lee Vilenski 12:58, 15 December 2024 (UTC)
    Wait, so why is 2FA not an option for everyone already? – Closed Limelike Curves (talk) 01:15, 18 December 2024 (UTC)
    @Closed Limelike Curves the MediaWiki's 2FA implementation is complex, and the WMF's processes to support people who get locked out of their account aren't able to handle a large volume of requests (developers can let those who can prove they are the owner of the account back in). My understanding is that the current processes cannot be efficiently scaled up either, as it requires 1:1 attention from a developer, so unless and until new processes have been designed, tested and implemented 2FA is intended to be restricted to those who understand how to use it correctly and understand the risks of getting locked out. Thryduulf (talk) 09:36, 18 December 2024 (UTC)
  • It probably won't make a huge difference because those who really desire 2FA can already request the permission to enable it for their account, and because no page mover will be required to do so. However, there will be page movers who wouldn't request a global permission for 2FA yet would enable it in their preferences if it was a simple option. And these page movers might benefit from 2FA even more than those who already care very strongly about the security of their account. ~ ToBeFree (talk) 03:18, 15 December 2024 (UTC)
  • Support and I can't think of any argument against something not only opt-in but already able to be opted into. Gnomingstuff (talk) 08:09, 15 December 2024 (UTC)
  • Oppose this is a low value permission, not needed. If an individual PMV really wants to opt-in, they can already do so over at meta - no need to build custom configuration for this locally. — xaosflux 15:06, 18 December 2024 (UTC)
  • Support; IMO all users should have the option to add 2FA. Stifle (talk) 10:26, 19 December 2024 (UTC)
  • Support All users should be able to opt in to 2FA. Lack of a scalable workflow for users locked out of their accounts is going to be addressed by WMF only if enough people are using 2FA (and getting locked out?) to warrant its inclusion in the product roadmap. – SD0001 (talk) 14:01, 19 December 2024 (UTC)
    That (and to @Stifle above) sounds like an argument to do just that - get support put in place and enable this globally, not to piecemeal it in tiny batches for discretionary groups on a single project (this custom configuration would support about 3/10ths of one percent of our active editors). To the point of this RFC, why do you think adding this for this specific tiny group is a good idea? — xaosflux 15:40, 19 December 2024 (UTC)
    FWIW, I tried to turn this on for anyone on meta-wiki, and the RFC failed (meta:Meta:Requests for comment/Enable 2FA on meta for all users). — xaosflux 21:21, 19 December 2024 (UTC)
    Exactly. Rolling it out in small batches helps build the case for a bigger rollout in the future. – SD0001 (talk) 05:24, 20 December 2024 (UTC)
    I'm pretty sure that 2FA is already available to anyone. You just have to want it enough to either request it "for testing purposes" or to go to testwiki and request that you made an admin there, which will automatically give you access. See H:ACCESS2FA. WhatamIdoing (talk) 23:41, 21 December 2024 (UTC)
    We shouldn't have to jump through borderline manipulative and social-engineering hoops to get basic security functionality.  — SMcCandlish ¢ 😼  04:40, 22 December 2024 (UTC)
  • Oppose. It sounds like account recovery when 2FA is enabled involves Trust and Safety. I don't think page movers' account security is important enough to justify increasing the burden on them. —Compassionate727  14:10, 21 December 2024 (UTC)
    Losing access to the account is less common nowadays since most 2FA apps, including Google Authenticator, have implemented cloud syncing so that even if you lose your phone, you can still access the codes from another device. – SD0001 (talk) 14:40, 21 December 2024 (UTC)
    But this isn't about Google Authenticator. Johnuniq (talk) 02:58, 22 December 2024 (UTC)
    Google Authenticator is a 2FA app, which at least till some point used to be the most popular one. – SD0001 (talk) 07:07, 22 December 2024 (UTC)
    But (I believe), it is not available for use at Misplaced Pages. Johnuniq (talk) 07:27, 22 December 2024 (UTC)
    That's not true. You can use any TOTP authenticator app for MediaWiki 2FA. I currently use Ente Auth, having moved on from Authy recently, and from Google Authenticator a few years back. In case you're thinking of SMS-based 2FA, it has become a thing of the past and is not supported by MediaWiki either because it's insecure (attackers have ways to trick your network provider to send them your texts). – SD0001 (talk) 09:19, 22 December 2024 (UTC)
  • Support. Even aside from the fact that, in 2024+, everyone should be able to turn on 2FA .... Well, absolutely certainly should everyone who has an advanced bit, with potential for havoc in the wrong hands, be able to use 2FA here. That also includes template-editor, edit-filter-manager, file-mover, account-creator (and supersets like event-coordinator), checkuser (which is not strictly tied to adminship), and probably also mass-message-sender, perhaps a couple of the others, too. Some of us old hands have several of these bits and are almost as much risk as an admin when it comes to loss of account control.  — SMcCandlish ¢ 😼  04:40, 22 December 2024 (UTC)
    Take a look at Special:ListGroupRights - much of what you mentioned is already in place, because these are groups that could use it and are widespread groups used on most WMF projects. (Unlike extendedmover). — xaosflux 17:22, 22 December 2024 (UTC)
    • Re That also includes file-mover, account-creator (and supersets like event-coordinator), and probably mass-message-sender: how in any way would the file mover, account creator, event coordinator and mass message sender user groups be considered privileged, and therefore have the oathauth-enable userright? ToadetteEdit (talk) 17:37, 24 December 2024 (UTC)
  • Comment: It is really not usual for 2FA to be available to a user group that is not defined as privileged in the WMF files. By default, all user groups defined at CommonSettings.php (iirc) that are considered to be privileged have the oathauth-enable right. Also, the account security practices mentioned in wp:PGM are also mentioned at wp:New pages patrol/Reviewers, despite not being discussed at all. Wouldn't it be fair to have the extendedmover userright be defined as privileged? ToadetteEdit (talk) 08:33, 23 December 2024 (UTC)
  • Support. Like SMcCandlish, I'd prefer that anyone, and particularly any editor with advanced perms, be allowed to turn on 2FA if they want (this is already an option on some social media platforms). But this is a good start, too. Since this is a proposal to allow page movers to opt in to 2FA, rather than a proposal to mandate 2FA for page movers, I see no downside in doing this. – Epicgenius (talk) 17:02, 23 December 2024 (UTC)
  • Support this opt-in for PMs and the broader idea of everyone having it by default. Forgive me if this sounds blunt, but the responsibility and accountability of protecting your account lie on you, not the WMF. Yes, they can assist in recovery, but the burden should not lie on them. ~/Bunnypranav:<ping> 17:13, 23 December 2024 (UTC)

Photographs by Peter Klashorst

Back in 2023 I unsuccessfully nominated a group of nude photographs by Peter Klashorst for deletion on Commons. I was concerned that the people depicted might not have been of age or consented to publication. Klashorst described himself as a "painting sex-tourist" because he would travel to third-world countries to have sex with women in brothels, and also paint pictures of them. On his Flickr account, he posted various nude photographs of African and Asian women, some of which appear to have been taken without the subjects' knowledge. Over the years, other Commons contributors have raised concerns about the Klashorst photographs (e.g. ).

I noticed recently that several of the Klashorst images had disappeared from Commons but the deletions hadn't been logged. I believe this happens when the WMF takes an office action to remove files. I don't know for sure whether that's the case, or why only a small number of the photographs were removed this way.

My proposal is that we stop using nude or explicit photographs by Klashorst in all namespaces of the English Misplaced Pages. This would affect about thirty pages, including high-traffic anatomy articles such as Buttocks and Vulva. gnu57 18:29, 16 December 2024 (UTC)

@Genericusername57: This seems as if it's essentially a request for a community sanction, and thus probably belongs better on the administrators' noticeboard. Please tell me if I am mistaken. JJPMaster (she/they) 23:12, 16 December 2024 (UTC)
@JJPMaster: I am fine with moving the discussion elsewhere, if you think it more suitable. gnu57 02:16, 17 December 2024 (UTC)
@Genericusername57: I disagree with JJPMaster in that this seems to be the right venue, but I also disagree with your proposal. Klashorst might have been a sleazeball, yes, but the images at the two listed articles do not show recognizable subjects, nor do they resemble “creepshots”, nor is there evidence they’re underage. If you object to his images you can nominate them on Commons. Your ‘23 mass nomination failed because it was extremely indiscriminate (e.g. it included a self-portrait of the artist). Dronebogus (talk) 00:30, 17 December 2024 (UTC)
@Dronebogus: According to User:Lar, Commons users repeatedly contacted Klashorst, asking him to provide proof of age and consent for his models, but he did not do so. I am planning on renominating the photographs on Commons, and I think removing them from enwiki first will help avoid spurious c:COM:INUSE arguments. The self-portrait you are referring to also included another naked person. gnu57 02:16, 17 December 2024 (UTC)
@Genericusername57: replacing the ones at vulva and buttocks wouldn’t be difficult; the first article arguably violates WP:ETHNICGALLERY and conflicts with human penis only showing a single image anyway. However I think it’s best if you went to those actual articles and discussed removing them. I don’t know what other pages use his images besides his own article but they should be dealt with separately. If you want to discuss banning his photos from Wikimedia in general that’s best discussed at Commons. In all cases my personal view is that regardless of whether they actually run afoul of any laws purging creepy, exploitative pornography of third-world women is no great loss. Dronebogus (talk) 01:16, 18 December 2024 (UTC)
I have to confess that I do not remember the details of the attempts to clarify things with Peter. If this turns out to be something upon which this decision might turn, I will try to do more research. But I’m afraid it’s lost in the mists of time. ++Lar: t/c 01:25, 24 December 2024 (UTC)
Note also that further attempts to clarify matters directly with Peter will not be possible, as he is now deceased. ++Lar: t/c 15:45, 24 December 2024 (UTC)
Several issues here. First, if the files are illegal, that's a matter for Commons as they should be deleted. On the enwiki side of things, if there's doubt about legality, Commons has plenty of other photos that can be used instead. Just replace the photos. The second issue is exploitation. Commons does have commons:COM:DIGNITY which could apply, and depending on the country in which the photo was taken there may be stricter laws for publication vs. capture, but it's a hard sell to delete things on Commons if it seems like the person in the photo consented (with or without payment). The problem with removing files that may be tainted by exploitation is we'd presumably have to remove basically all images of all people who were imprisoned, enslaved, colonized, or vulnerable at the time of the photo/painting/drawing. It becomes a balance where we consider the context of the image (the specifics of when/where/how it was taken), whether the subject is still alive (probably relevant here), and encyclopedic importance. I'd be inclined to agree with some above that there aren't many photos here that couldn't be replaced with something else from Commons, but I don't think you'll find support for a formalized ban. Here's a question: what happens when you just try to replace them? As long as the photo you're replacing it with is high quality and just as relevant to the article, I don't think you'd face many challenges. — Rhododendrites \\ 16:20, 24 December 2024 (UTC)

Move the last edited notice from the bottom of the page to somewhere that's easier to find

Currently, if you want to check when the last page edit was, you have to look at the edit history or scroll all the way to the bottom of the page and look for it near the licensing info. I propose moving it under the view history and watch buttons, across from the standard "This article is from Misplaced Pages" disclaimer. Non-technical users may be put off by the behind-the-scenes nature of the page or simply not know of its existence. The mobile site handles this quite gracefully in my opinion. While it is still at the bottom of the page, it isn't tucked in with the licensing text and takes up a noticeable portion of the page. Mgjertson (talk) 08:32, 17 December 2024 (UTC)
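(For reference, the underlying data is already exposed: the timestamp shown in that footer line can be fetched with a standard MediaWiki action API query for the newest revision. This sketch only builds the query URL and parses a response; it makes no live request.)

```python
import json
from urllib.parse import urlencode

API = "https://en.wikipedia.org/w/api.php"

def last_edited_url(title):
    # Standard action API query: prop=revisions with rvlimit=1 returns
    # only the newest revision; rvprop=timestamp keeps the payload tiny
    return API + "?" + urlencode({
        "action": "query",
        "prop": "revisions",
        "titles": title,
        "rvprop": "timestamp",
        "rvlimit": 1,
        "format": "json",
        "formatversion": 2,
    })

def parse_last_edited(body):
    # Extract the ISO 8601 timestamp from a formatversion=2 JSON response
    data = json.loads(body)
    return data["query"]["pages"][0]["revisions"][0]["timestamp"]
```

So surfacing "last edited" more prominently is purely a presentation question; the skin already has the value at render time.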

Editors can already enable mw:XTools § PageInfo gadget, which provides this information (and more) below the article title. I don't think non-editors would find it useful enough to be worth the space. jlwoodwa (talk) 18:12, 17 December 2024 (UTC)

I wished Misplaced Pages supported wallpapers in pages...

It would be even more awesome if we could change the wallpaper of pages in Misplaced Pages. But the fonts' colors could change to adapt to the wallpaper. The button for that might look like this: Change wallpaper Gnu779 (talk) 11:02, 21 December 2024 (UTC)

I think we already tried this. It was called Myspace ;) —TheDJ (talkcontribs) 11:51, 21 December 2024 (UTC)
See Help:User style for information on creating your own stylesheet. isaacl (talk) 18:03, 21 December 2024 (UTC)
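(To illustrate isaacl's pointer: a personal stylesheet along these lines, placed in a user CSS page such as Special:MyPage/common.css, would approximate a "wallpaper" for the editor who installs it. The selectors and the image URL are assumptions for illustration; content-area class names vary by skin.)

```css
/* Hypothetical user style: give the content area a background image.
   .mw-body is the usual Vector content container; adjust per skin. */
.mw-body {
    background-image: url("https://example.org/wallpaper.jpg"); /* placeholder URL */
    background-size: cover;
}
/* Re-tune text colour for contrast against the wallpaper */
.mw-body {
    color: #222;
}
```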

Change page titles/names using "LGBTQ" to "LGBTQ+"

Please see my reasoning at Misplaced Pages talk:WikiProject LGBTQ+ studies#LGBTQ to LGBTQ+ (and please post your thoughts there). It was proposed that I use this page to escalate this matter, as seen on the linked talk page. Helper201 (talk) 20:42, 23 December 2024 (UTC)
