Wikipedia process page for approving bots. All editors are encouraged to participate in the requests below – your comments are appreciated more than you may think!
New to bots on Wikipedia? Read these primers!
- Approval process – How these discussions work
- Overview/Policy – What bots are/What they can (or can't) do
- Dictionary – Explains bot-related jargon
To run a bot on the English Wikipedia, you must first get it approved. Follow the instructions below to add a request. If you are not familiar with programming, consider asking someone else to run a bot for you.
Current requests for approvals
MinunBot
MinunBot is a bot operated by me, Minun, used for the purpose of delivering newsletters, important Wikipedia news, or any other templates that need to be delivered to multiple users. It is NOT, however, intended to be used for delivering RFA notices or notices about particular articles (unless severe) – only for WikiProjects or places that affect a wide range of articles and users, and have at least 10 users —Minun 19:35, 3 August 2006 (UTC)
- Is this to be an open-ended delivery service, or dealing with specific topics? — xaosflux 23:58, 3 August 2006 (UTC)
- It would be an open-ended delivery service; there will be an approval page where users can request to have MinunBot deliver newsletters or other important notices, cheers —Minun 12:49, 4 August 2006 (UTC)
- I'm about to improve the user page, so check it when it's done, cheers —Minun 11:20, 4 August 2006 (UTC)
- Hm. I don't see that there is any harm in having something akin to what NotificationBot did (I've actually got spambot code sitting around for pywikipedia) but there needs to be tight control on what is done with this; users shouldn't be able to just spam anything, it needs to be a list of users who have signed up for something or a specific delivery. I'd be a lot more comfortable if you had a project lined up already that we could give trial approval for and then review the progress. Essjay (Talk) 15:01, 4 August 2006 (UTC)
- Minun is facing a one-year ban from Wikipedia any day now (Wikipedia:Requests for arbitration/Iloveminun/Proposed decision). Regardless of the need for or benefit of the bot, someone else will have to run it. Thatcher131 (talk) 15:04, 4 August 2006 (UTC)
- I'm not going to get banned, because I stopped doing these things; the last arbitrator knows that, so if he supports (which he might not), I can give proof, so I've got little to worry about right now.
- Anyway, replying to the above comment, there is going to be a page where users must request for MinunBot to be used, and I (Minun) will either approve or disapprove of the request. If it's approved, MinunBot will start delivering, but it's not really spam, but anyway, thanks for your message, cheers —Minun 15:08, 4 August 2006 (UTC)
- I could maybe try it with Esperanza (if the admin general wants a bot to do it) as a trial, cheers —Minun 15:09, 4 August 2006 (UTC)
- Strong opposition to this, on the grounds cited by Thatcher. Unless Minun knows something the rest of us don't, he's two "close" votes away from remedies passing 6-0 consisting of four bans summing to a year, two paroles, and a probation. Add to this a remedy which begins: "Minun shall choose one account and edit only under that account." I urge that any approval of this bot be held over until any bans are served (or the case is closed without any being applied), and Minun requests clarification from the Arbcom as to whether the "one account" remedy is intended to also preclude a bot account. Alai 17:32, 4 August 2006 (UTC)
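Mechanically, the delivery service being proposed is simple. A minimal sketch in Python, where the `wiki` client and its `get_text`/`save_text` helpers are hypothetical stand-ins rather than the real pywikipedia API:

```python
def deliver_newsletter(wiki, signup_page, message, summary):
    """Append a newsletter to the talk page of every signed-up user.

    `wiki` is a hypothetical client object; real frameworks differ.
    """
    # Assumes one username per line on the signup page, e.g. "Example User".
    users = [line.strip() for line in wiki.get_text(signup_page).splitlines()
             if line.strip()]
    for user in users:
        talk = "User talk:" + user
        wiki.save_text(talk, wiki.get_text(talk) + "\n\n" + message, summary)
```

The control point the reviewers above ask for lives entirely in which `signup_page` the operator feeds in, which is why they want the list of eligible deliveries tightly restricted.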
User:Eubot
I have used this account to add templates to Dutch municipalities, and later to add articles about Dutch villages and towns. Now I want to create articles for Italian municipalities (see the example at User:Eubot/Moretta). Until now, the bot has run without a bot flag without any complaints. Because there are a lot of Italian municipalities to be added, I would like my bot to have a bot flag now. Eugène van der Pijll 21:13, 1 August 2006 (UTC)
- Looks good, out of interest, where is the information from and what software do you use? Martin 21:16, 1 August 2006 (UTC)
- The information ultimately comes from istat, the Italian statistical institute. I have not yet downloaded that data, however; these same articles have been added to the Dutch wikipedia recently, and I have asked the bot owner over there if he can provide all of the data in an easy-to-use format. I'll be using a Perl script, written completely by myself. Eugène van der Pijll 21:23, 1 August 2006 (UTC)
- Would you mind providing a link to that institute's website and telling us what license it uses for that data? --WinHunter 02:19, 2 August 2006 (UTC)
- I've been told the data on Italian municipalities has been uploaded to the Italian Wikipedia: it:Wikipedia:Data/Comuni. I will probably use those files as a source. The copyright page of Istat says: Dati ed analisi dell’Istituto nazionale di statistica possono essere scaricati, utilizzati e pubblicati a condizione di citarne la fonte. ("Data... can be used... on the condition of citing the source.")
- This kind of factual/statistical data is ineligible for copyright in the United States, though not necessarily in other countries. Dragons flight 22:11, 2 August 2006 (UTC)
- Seems good (as one would hope, given the name) on the basis of user-space drafts. Can you clarify under what circumstances the bot will be replacing existing articles, as opposed to adding new ones? And can you also give estimates of the number of articles to be created (or modified) on a per-region or per-province basis? Thanks. Alai 17:40, 4 August 2006 (UTC)
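The operator plans a Perl script; purely as an illustration of the technique (filling a fixed wikitext template from tabular data), here is a Python-flavoured sketch with an invented CSV layout – the field names and template wording are not from the actual bot:

```python
import csv

# Invented field names; the real ISTAT data layout will differ.
TEMPLATE = (
    "'''%(name)s''' is a municipality (''comune'') in the province of "
    "%(province)s in the Italian region of %(region)s. It covers an area "
    "of %(area_km2)s km² and has a population of %(population)s."
)

def build_articles(path):
    """Yield (title, wikitext) pairs, one per municipality in the file."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            yield row["name"], TEMPLATE % row
```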
User:BigtopBot
This is a proposed bot that I'm operating to clean up vandalism, just like AntiVandalBot does. Vandalism continues to increase rapidly, and the existing vandal-fighting bots can't catch every instance. AntiVandalBot, Tawkerbot2 and Tawkerbot4 (for example) can't do all the work, so I would like to help them.
- About vandalism: Vandalism is any addition, deletion, or change to content made in a deliberate attempt to compromise the integrity of the encyclopedia. Vandalism here on Wikipedia may include, but is not limited to: replacement of existing text with obscenities, page blanking, or the insertion of bad jokes or other nonsense. It's usually easy to spot, especially for AntiVandalBot.
- Purposes: This bot is designed to fight most of the vandalism above. The bot should catch about 80-90% of the vandalism described above. (Remember that bots are not perfect!) I'm going to operate this bot on the schedule below, so I'm making this an automatically-scheduled bot.
- Scheduling: Scheduling is critical - I have to pick my days and times to operate this. I'm trying to have this bot operate from 8:00 AM PT to 9:00 PM PT Monday through Thursday, from 12:00 PM PT to 9:00 PM PT Friday, 12:00 PM PT to 12:00 AM PT Saturday, and 9:00 AM to 10:00 PM Sunday. (Remember that PT is 8 hours back from UTC!)
- Language: The language it will run in is the same old plain English. However, it will use the pywikipedia framework to operate efficiently. Software is important to run a bot.
- Why do I need it: I need a bot because I've been using VandalProof, but sometimes, I don't get a chance to revert an edit. Using a bot gets me more chances to edit. When Cyde's AntiVandalBot is not around, I can use my bot to fight vandalism. When AntiVandalBot is around, I can use my bot to help him.
- Importance: It's highly important that I use a bot that fights vandalism. Reverting vandalism is like cleaning up junk - but on Wikipedia, it's electronic. Bots should be convenient and easy to use because they're automatic, just like a Roomba robotic floorvac, which cleans up dirt in the carpet.
- Please support approving this bot and making it work! --Bigtop 02:13, 31 July 2006 (UTC)
Some questions:
- What do you mean by "...sometimes, I don't get a chance to revert an edit...", is it that someone else reverted it, or you ignored the vandalism?
- What algorithms will you be using to determine vandalism?
- Will the bot do anything other than revert edits?
Thanks! — xaosflux 03:06, 31 July 2006 (UTC)
Answers:
- This means that somebody else must have already reverted the edit.
- What's an algorithm?
- I think that's it. I wanted to have the bot revert edits and (especially) warn the editor who vandalized the page.
Please contact me if you need help. --Bigtop 16:27, 31 July 2006 (UTC)
- An algorithm is a technique for analyzing a given set of data...in your case, new changes to Wikipedia. Specifically, we would like to know how your bot decides what a vandalistic edit is. (If I blanked my talk page, for instance, would I be reverted and warned?). alphaChimp 18:00, 31 July 2006 (UTC)
About the algorithm: The bot will revert common kinds of vandalism, and it will warn the editor who committed the vandalism. Common kinds such as adding/replacing random words, adding inappropriate images, and blanking can be reverted by this bot. --Bigtop 19:36, 31 July 2006 (UTC)
- How will the bot tell if I am adding/replacing random words? Also, how can it tell that an image is inappropriate? alphaChimp 21:28, 1 August 2006 (UTC)
I'll try to copy AntiVandalBot's bad words list to my bot so it can work the same way AntiVandalBot does. --Bigtop 20:38, 3 August 2006 (UTC)
- How will the bot be able to tell the difference between, for example, a legitimate usage of "fuck" and a purely vandalistic usage? alphaChimp 14:22, 4 August 2006 (UTC)
- Strong reservations about this until a much more full and clear description of the algorithm is provided. The comment about "English and pywikipedia" is far from illuminating: pywikipedia doesn't cover this, and English doesn't provide much of a bot framework, nice though that would be. Does actual software to do this actually exist? Positively opposed to an index expurgatorius of "bad words" as the basis of automatic reversion. Alai 15:56, 4 August 2006 (UTC)
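The objection is easy to make concrete. A toy version of bad-word matching – the word list and helper below are invented for illustration, not AntiVandalBot's actual heuristics – flags legitimate text just as readily as vandalism:

```python
import re

# Toy list; a real vandal-fighting bot uses much richer heuristics.
BAD_WORDS = re.compile(r"\b(fuck|shit)\b", re.IGNORECASE)

def looks_like_vandalism(added_text):
    """Naive check: flag any edit whose added text matches the list."""
    return bool(BAD_WORDS.search(added_text))

# The false-positive problem in one line: a factual sentence about
# Cohen v. California, whose jacket read "Fuck the Draft", trips the
# filter exactly as vandalism would.
print(looks_like_vandalism('His jacket read "Fuck the Draft".'))  # True
```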
CyroBot
This is a bot account, belonging to CyRoXX and is based on the Pywikipedia framework.
In the English version of Wikipedia it will be used to correct interwiki links. For this task, I run it in "autonomous mode". No critical edits should be expected from this mode, so I only supervise its interwiki link actions from time to time. When I check the interwiki links in the German articles and find something to correct on en:, the ability to correct it myself via bot saves time and, mainly, resources, because you don't have to wait for another bot on the English Wikipedia.
At the moment, CyroBot is a registered bot on de:, where it also performs other tasks, primarily text replacements. As I'm not online here that much, I ask you to inform me here when bot status is given (or not). Thanks. --CyRoXX 09:41, 30 July 2006 (UTC)
- What kind of link corrections does it make? Does it ever add links? Voice-of-All 01:11, 31 July 2006 (UTC)
I need diffs on this one -- Tawker 01:14, 31 July 2006 (UTC)
User:Dark Shikari Bot
- What: The purpose of this bot is to do WikiProject maintenance for whatever WikiProject thinks its services are useful. WikiProject Anime and Manga will be the first beneficiary if this bot is approved. Of course, all its actions will have to be approved by any WikiProject before it is used on that WikiProject, as while in many cases they may be very useful, some WikiProjects may have different sorting systems that this bot should not mess with.
- Why: I have noticed that WikiProjects run into a few problems in terms of raw sorting. First of all, there are often a huge number of stubs (hundreds or thousands) that belong in the category of a WikiProject (with the appropriate template in the discussion page) but have not been included. A bot could easily do this. There are other similar things, of course, that the bot could also do.
- Exactly What:
- Put all "anime and manga stubs" not in WikiProject Anime and Manga into Wikiproject Anime and Manga as Stub-Class.
- If WikiProject Anime and Manga lists a stub as anything but Stub-Class, remove the stub tag as long as the article is more than 1000 characters long. If it is shorter, do nothing, as it could be an actual stub. The 1000-character cutoff isn't arbitrary, but simply a safety feature: it prevents a really short article that was accidentally listed as a non-stub on the article rating scale from losing its stub tag.
- For all projects with their own rating system, turn all GA-class articles in those projects that aren't listed as GA or better into GA-class articles under that project (this will start with WikiProject Anime and Manga only).
- It uses the pywikipedia framework. It won't be a server hog because it will be run manually (i.e. only when I tell it to), so instead of patrolling recent changes for hours on end it will simply patrol the categories when told to, probably weekly or every few days.
- There are thousands of articles here that need proper sorting, and a bot in my opinion is the best way to do it. In addition, if there is some sort of mistake (i.e. a B-Class article that's currently a stub that gets unlisted as a stub when it shouldn't be), it isn't devastating: a 99% success rate would add far more good than it would bad, and I highly doubt it would have any real failures, unless some fool ran around rating Stubs as B-Class articles. Dark Shikari 10:26, 18 July 2006 (UTC)
- I don't see a problem, as long as it's being sponsored by a Wikiproject, and the exact details of what it is going to do are approved here in advance. We don't generally give a "whatever you need it to do" clearance; you tell us specifically what it will do, and we approve that. If you add something, you drop a note here saying what you're adding, and we say yea or nay. I see no problem why it can't have a trial period once it's set up and has a specific list of tasks. Essjay (Talk) 00:41, 19 July 2006 (UTC)
- It's not really "sponsored" just yet: I'm asking what people think of it on the WikiProject talk page. So far I've received no disapproval--they've also made suggestions as to what I can and can't do in regards to sorting using the bot. You can find the discussion so far here. How much approval should I look to get before I set the bot into motion? Dark Shikari 01:09, 19 July 2006 (UTC)
- Dear God no!!!! You don't seem to realise that stubs and Stub-Class articles are two completely different things! The terminology is admittedly extremely confusing (and the sooner something is done about it, the better). Also you clearly don't understand that length is only a minor consideration when it comes to working out what a stub is. An article with one line of text followed by a list of 50 examples, or one line of text followed by a large table, is definitely a stub and likely well over 1000 characters. This is why stubs are sorted by hand, rather than by some automated method. Having the bot run in this way could well reverse much of the work of the Stub sorting wikiproject. Grutness...wha? 01:29, 19 July 2006 (UTC)
- (Just for clarity) Hence why I said "as long as it's being sponsored by a Wikiproject, and the exact details of what it is going to do are approved here in advance." My take would be that the Wikiproject people would be quick enough to know what should and shouldn't be done, and that as long as specific changes are set out here before they are done, there is plenty of time for approval. Just saying that for the sake of clarity on my earlier comments. Essjay (Talk) 01:42, 19 July 2006 (UTC)
I have similar concerns to Grutness. The whole concept of a "stub class article" is poorly defined at best: at a deletion discussion on one such category, one WP1.0ist argued to keep them separate specifically on the grounds that "Stub class" is not the same as "stub"; another wanted to keep, on the basis that they were essentially the same. This needs to be much more clearly defined before a bot goes around making sweeping changes based on an assumption one way or the other. I see no evidence of support for this at the indicated Wikiproject, and the following comment sounds like opposition, or at least a reservation, to me: "It probably should go case-by-case (which I guess isn't what you want to hear for a bot, huh)." I'm especially opposed to stub-tag removal by bot; if a B-grade article is still tagged as a stub, there's clearly been a snafu someplace, since appropriate tagging and categorisation should be required to get it to that standard: much better to detect these by category intersection sorts of report generation, and have someone fix them manually. Stub-tagging might be more reasonable, but again, it's been specifically claimed by one person that some "Stub class" articles are not stubs, so this would require clarification and refinement; it would also have to be done very carefully to make sure that the "Stub class" tag, and the "stub" tag have the same scope by topic, and that no more specific tag applied instead. (For example, doing this to apply a single stub type to the whole of WPJ Albums, or WPJ Military History, would be a disaster.) Alai 04:06, 19 July 2006 (UTC)
- I hope nobody minds my ill-informed opinion, but to incorporate Alai's suggestion into the bot sounds like a benefit. Without doing any editing, the bot could see how many articles have been snafu'd in such a manner and generate a report to somebody's userpage. It almost sounds like this should be a generic bot that Project leaders could download and configure for their specific purposes, or you could keep it proprietary and write all those little sub-bots yourself. ;) Xaxafrad 04:20, 19 July 2006 (UTC)
- Well I had been informed by many that stub-class articles and stubs were in fact the same thing, and that anything above a stub-class article could not possibly be a stub. I had also been told that anything under the category "anime and manga stubs" most certainly belongs in Wikiproject Anime and Manga as a stub-class article. So who's right? I'm now getting confused... Dark Shikari 09:48, 19 July 2006 (UTC)
- And in addition, wouldn't the most recent assessment of an article always be trusted? It seems as if most of the stub categorization was made months or even years ago, and hundreds of articles are still marked as stubs that have expanded far more since then, and have much higher ratings within the WikiProject. Are you saying that WikiProject ratings are totally useless and cannot be used to justify removing a stub tag? The WikiProject members would disagree. Dark Shikari 10:00, 19 July 2006 (UTC)
- Oh, and also, is there anything wrong with simply having the bot shove all the Anime and Manga Stubs into the Wikiproject? There's a huge number that aren't part of the WikiProject, and nobody seems to have a problem with assigning them to the Project. Dark Shikari 13:03, 19 July 2006 (UTC)
- If "Stub class article" and "stub" really are the same, then we're back with my original concerns, i.e., why have two sets of categories for the same thing, thereby duplicating work, and causing exactly the sort of inconsistency described? See for example this discussion, in which the matter is made... well, rather opaque, actually. One claim seems to be that a "long but useless" article would be "Stub class", but not a "stub" (though personally I would say that a textually long article can still be a stub, so even that's far from clear either way). It's possible one implies the other, but not vice versa, for example. No, I'm not saying assessment ratings are totally useless: are you saying stub tags are totally useless? Members of WP:WSS would disagree. An automated rule that simply assumes which of the two is incorrect is highly problematic, either way. An assessment that ignores the presence of a stub tag is either a) using different criteria for the two, or b) failing to leave the article in a state consistent with said assessment; a person has made them inconsistent, and a person (same or otherwise) should make a judgement as to how to make them consistent. People are stub-tagging things all the time, I see no reliable basis for assuming that the assessment rating is either more reliable, or more recent. I have no objection to the application of WPJ-tags by bot, if the associated WPJ has expressly agreed to the basis this is being done on, in their particular case. Alai 14:58, 19 July 2006 (UTC)
Thanks for the input, Alai. I've made a revised set of things the bot could do:
- Put all "anime and manga stubs" not in WikiProject Anime and Manga into Wikiproject Anime and Manga as Stub-Class. There's no way for the bot to figure out if they deserve a higher rating than Stub-Class, so its fair to start them off there.
- For all projects with their own rating system, turn all GA-class articles in those projects that aren't listed as GA or better into GA-class articles under that project (this will start with WikiProject Anime and Manga only). Do the same with FAC articles that aren't listed as FACs under the WikiProject system. Dark Shikari 15:15, 19 July 2006 (UTC)
- That seems completely fine, no objections to those tasks. Obviously in each case consultation with the wikiproject, and confirmation with them of the scope of the articles they're "adopting" thereby would be indicated. Alai 02:59, 21 July 2006 (UTC)
- Seems fine to me as long as there is agreement at the project. Voice-of-All 17:14, 29 July 2006 (UTC)
- Before I run the bot, I want to see what Wikipedians think of another idea I had in mind (possibly a second thing the bot can do). How about a routine that checks through Wikipedia articles and fixes redirects: i.e. if an article links to "Blue", when the actual article is "Blue (colour)" and there is a redirect between the two (not a disambig), the original article will be changed to link to "Blue (colour)." The appearance of the page will remain unchanged: only the link in the code will be changed. This would probably slightly lower the load on Wikimedia servers by lowering the number of redirects. In addition, it's always struck me as somewhat odd to have articles link to redirects, as redirects make more sense as methods of catching various user-inputted article names and sending them to the right article. This especially applies to older articles that link to a main article which has now been moved. What do you all think? Dark Shikari /contribs 19:41, 31 July 2006 (UTC)
- Doesn't seem such a good idea to me. There's often been complaints about "needless" edits to eliminate redirects, and the opinion has been expressed that they cause more load issues than they solve, due to edits being much (much, much) more expensive than page fetches. Alai 17:36, 4 August 2006 (UTC)
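Following Alai's earlier suggestion, the stub/assessment consistency check is safest as a report generator rather than an auto-editor. A sketch of that check – the page-fetching side is left as a hypothetical iterable, and 1000 characters is the cutoff proposed above:

```python
def stub_tag_mismatches(pages):
    """Yield pages whose stub tag contradicts their assessed class.

    `pages` yields (title, wikitext, assessed_class) tuples, fetched
    by whatever framework is in use (hypothetical here).
    """
    for title, text, assessed_class in pages:
        has_stub_tag = "-stub}}" in text
        if has_stub_tag and assessed_class != "Stub" and len(text) > 1000:
            # Flag for human review instead of removing the tag by bot.
            yield title, assessed_class
```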
Requests to add a task to an already-approved bot
User:Drinibot redux
Another task. When trying to work on the "Now commons" backlog, I see some images need to be moved (for instance, Image:Glucose.png at en: is a dupe of Image:Glucose Haworth.png at commons).
So I'm requesting approval to run image.py to orphan local copies of images I manually check to be the same and to run nowcommons.py to clear the dupes.
I'll be running in test mode today so people can see what I'm doing. -- Drini 21:50, 2 August 2006 (UTC)
- Will this bot be replacing free images with less-free images? — xaosflux 03:02, 3 August 2006 (UTC)
- Ok, approved for a very limited trial (say 10 edits or so) - then post some diffs and we'll take a look -- Tawker 06:27, 3 August 2006 (UTC)
- NO. It will be replacing images with EXACT COPIES of them found at commons; I just pointed out an example. The Glucose example was not good, since once I removed the local copy, it turned out that it was shadowing an image from commons, so they look different (the bot replaced the local en:Image:Glucose.png with the EXACT COPY commons:Image:Glucose Haworth.png). I'll point out some other examples.
- Here are some diffs:
- It is not enough, at least for me, to replace images with duplicate images if the description pages are not similar. In the glucose example the commons description page is not in English, making it less useful to our users. Dragons flight 19:19, 3 August 2006 (UTC)
- Dragonsflight: Yes, but I'll do that manually if nowcommons.py doesn't do it automatically; first, though, I need to run image.py. Remember, Glucose.png was not a good example, since commons:Image:Glucose.png was NOT the same as en:Image:Glucose.png; you need to compare commons:Image:Glucose Haworth.png with the deleted image at en:Image:Glucose.png, and you'll see the descriptions were similar. In other words, the bots won't do all the work in order to clear the backlog, but they will lend me a hand moving stuff around so I just need to check and copy the important data manually -- Drini 19:33, 3 August 2006 (UTC)
Sorry about that mess, it was a bad example. Think of the grapes one; I haven't changed the description, I'm doing it at the moment. -- Drini 19:23, 3 August 2006 (UTC)
Such cleaning up would allow the image commons:Image:Grapes.jpg (commons:Grape.jpg) to be used in en: articles, which can't be done at the moment since it's shadowed by the local duplicate -- Drini 19:37, 3 August 2006 (UTC)
Here's another example. The bot would change articles linking to Image:Hamburger.jpg to Image:Hamburger sandwich.jpg in order to remove the local copy (which is a duplicate) that blocks the usage of commons:Image:Hamburger.jpg, which can't be used at the moment due to the unnecessary duplicate shadowing it. Again, I will manually check and update descriptions so information is preserved. -- Drini 20:33, 3 August 2006 (UTC)
- Support giving Drinibot this capability. I took a look at Grapes.jpg, and since it had been a featured article it was all over the place. Bastique▼ voir 20:45, 3 August 2006 (UTC)
- Support since Drini is checking all of the images manually anyway for license/duplicate nature. The bot is good for replacing images that occur on many pages. Voice-of-All 20:51, 3 August 2006 (UTC)
There are basically two steps to this: relinking the images and merging the descriptions. I take it you are going to be doing the latter in a manually assisted fashion? In most cases, image description pages are fairly simple and easily merged. In a few cases, though, description pages can be complicated and multilingual. In general, I don't support eliminating the local page if the result is an image description that is much harder for our users to understand. For example, having a page with large blocks of text in several languages is worse than keeping the local copy, in my opinion. Do you plan to verify that a description merge is reasonable before running your relinking script? And if so, what criteria will you use? Similarly, will your efforts adhere to the requirements of the Commons CSD? This includes verifying that the NowCommons tag is at least a week old, that the image description page contains no specific requests to preserve a separate local copy, and that all freely licensed versions of the file have been made available in the version history of the Commons page. I notice that you apparently did not meet the last criterion in moving the grapes image. Dragons flight 21:06, 3 August 2006 (UTC)
- I wasn't the one moving the grapes image; it was already there. But yes, the last step will be done manually, checking licenses and requirements. It's for changing articles so they point at the commons image instead of the local one that I need the bot. I'm aware that in some cases it's not desired (as in, main page articles pointing to a protected local copy), so I'm not touching those cases. -- Drini 21:18, 3 August 2006 (UTC)
- And, in any case, the bot would not be deleting pages, as the bot runs without privileges. That is, what I'm asking approval for here is not for a bot to remove images or pages; it's for it to change articles so they point to the commons image instead of the local copy. The bot would only retarget articles (since they are exact copies, articles won't change appearance either) -- Drini 21:28, 3 August 2006 (UTC)
- I don't have a problem with this as long as you agree that doing the relinking requires keeping up with and appropriately managing the description pages as well. I consider the processes intrinsically linked in this request since I wouldn't want to see a bot blindly shifting image links to Commons if it resulted in description page content being lost or mangled. Dragons flight 21:35, 3 August 2006 (UTC)
- OK, how about I vow to relink to the commons images, making sure the descriptions are preserved or merged into the commons one? (Again, the bot would not be deleting stuff in any case.) -- Drini 23:07, 3 August 2006 (UTC)
- This is sounding good; my biggest concern is that we don't end up with less free licenses, such as replacing a public domain image with a CC-BY-SA image. If this is being checked, no objection on that point. — xaosflux 00:05, 4 August 2006 (UTC)
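The relinking step image.py performs amounts to one text replacement across every page that uses the image. A framework-agnostic sketch – the `wiki` helpers are invented, and the real script differs in detail:

```python
import re

def relink_image(wiki, old_name, new_name, summary):
    """Point every [[Image:old_name ...]] use at new_name instead."""
    pattern = re.compile(r"(\[\[\s*Image\s*:\s*)" + re.escape(old_name),
                         re.IGNORECASE)
    for title in wiki.pages_using_image(old_name):
        text = wiki.get_text(title)
        new_text = pattern.sub(lambda m: m.group(1) + new_name, text)
        if new_text != text:  # save only when something actually changed
            wiki.save_text(title, new_text, summary)
```

Because the commons file is an exact copy, the rendered articles do not change; only the link target does, which is what lets the shadowing local copy be deleted afterwards.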
User:DumbBOT, fifth function
Some articles and images are tagged {{copyvio}} but are then not listed at WP:CP. I'd like approval for User:DumbBOT to perform the second step for these incomplete nominations. This is done by loading Category:Possible copyright violations, waiting 120 seconds (to give time to complete the noms), loading WP:CP, and comparing the list of articles/images in the category and the list of articles/images linked from the copyright main page. The resulting lists are then posted as in and . (Nothing is posted if the list to post is empty.) I'd run this bot manually. (Liberatore, 2006). 12:50, 1 August 2006 (UTC)
- How often would this be run, and could the wait be increased to 300 seconds? Geni 14:39, 1 August 2006 (UTC)
- As far as I can see, there is probably no need to run it more than once in two/three days (I am not planning to run it scheduled.) I could increase the delay to 300, or also to 600 secs; there is no problem with this. (Liberatore, 2006). 14:50, 1 August 2006 (UTC)
- 600 would be overkill; I think 5 minutes is enough. Geni 15:15, 1 August 2006 (UTC)
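The comparison described above is a straightforward set difference. A minimal sketch, with hypothetical `category_members`/`links_from` fetchers standing in for however the real bot reads those two lists:

```python
import time

def unlisted_copyvios(wiki):
    """Pages tagged {{copyvio}} but not yet listed at WP:CP."""
    tagged = set(wiki.category_members("Category:Possible copyright violations"))
    time.sleep(120)  # give nominators time to complete the second step
    listed = set(wiki.links_from("Wikipedia:Copyright problems"))
    return sorted(tagged - listed)
```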
DFBot
I am working on developing a system for monitoring key administrative and editing-related categories for the purpose of automatically identifying backlogs and other areas in need of attention.
An output prototype can be seen at User:Dragons flight/Category tracker.
The largest categories, those with 600+ entries, are scraped from Special:Mostlinkedcategories. The smaller ones are fetched by directly loading the category pages.
When completed, there will be a control page at User:Dragons flight/Category tracker/Config for adjusting which pages are tracked, how often they are monitored, etc.
Dragons flight 06:07, 31 July 2006 (UTC)
- How often will this run? — xaosflux 00:44, 2 August 2006 (UTC)
- Well, one could make an argument that there are a few categories (e.g. Wikipedians looking for help, Requests for unblock, Misplaced Pages protected edit requests) where it might make sense to monitor them for changes several times per hour, but for the most part I was thinking from several time per day to once every few days. Special:Mostlinkedcategories only updates twice a week, so there is no point hitting that more often. I'm not actually sure how often would be most useful, which is why I was planning to build in some flexible control. Any thoughts? Dragons flight 02:14, 2 August 2006 (UTC)
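A sketch of the monitoring loop, under the assumptions (both invented here) that a hypothetical `category_size` fetcher exists and that the planned Config page maps category names to alert thresholds:

```python
def backlog_report(wiki, watched):
    """Render one wikitext report line per watched category.

    `watched` maps category name -> alert threshold, mirroring the
    planned Config page; `wiki.category_size` is a stand-in fetcher.
    """
    lines = []
    for category, threshold in sorted(watched.items()):
        size = wiki.category_size(category)
        flag = " '''backlogged'''" if size >= threshold else ""
        lines.append("* [[:Category:%s]] – %d entries%s" % (category, size, flag))
    return "\n".join(lines)
```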
Alphachimpbot Task 4, CIA
This request is per discussion on Wikipedia:Bot requests. The CIA recently updated their site to force a secure connection (https) instead of (http). All Wikipedia links to the CIA's site using the old http format redirect to a fun error page. I'm proposing running AWB to replace instances of http with https in CIA links. By my search, there are only a little over 100 pages on Wikipedia with the old version. An example of the bot in action (manually) can be found here. alphaChimp 22:05, 29 July 2006 (UTC)
- Correction: By this search, there are more than 850 for the World Factbook, one of the primary things that Wikipedia links to on the CIA site. alphaChimp 22:07, 29 July 2006 (UTC)
- Just wanted to say thanks to alphachimp for his quick action to my original request in the Bot Request section. Have a good one :) Bsheppard 01:17, 31 July 2006 (UTC)
- Anybody? alphaChimp 05:22, 2 August 2006 (UTC)
- A simple search-and-replace? Feel free. --Carnildo 18:26, 4 August 2006 (UTC)
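The replacement itself is a one-line regex, scoped to the CIA hostname so that no other http link is touched. A sketch of the rule AWB would apply:

```python
import re

# Rewrite only links into cia.gov; every other http URL is left alone.
CIA_LINK = re.compile(r"http://(?=(?:www\.)?cia\.gov/)")

def fix_cia_links(wikitext):
    """Upgrade http CIA links to https."""
    return CIA_LINK.sub("https://", wikitext)

print(fix_cia_links("http://www.cia.gov/cia/publications/factbook/"))
# -> https://www.cia.gov/cia/publications/factbook/
```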
Grafikbot, MILHIST article tagging
I would like Grafikbot (talk · contribs) to tag military history related articles with the {{WPMILHIST}} template for assessment purposes as defined in the WP:1.0 program.
A completely automatic tagging bot is still out of reach, so for the time being, a limited tagging run will be executed as follows:
- Every article with the {{Mil-hist-stub}} stub tag and related tags (list available at WSS), as well as with {{mil-stub}} and below (list available at WSS), is considered a military history article and thus subject to tagging.
- The list from Misplaced Pages:WikiProject Military history/New articles will also be processed.
- The talk page of each article is tagged with {{WPMILHIST}} prepended to the talk page (even if the talk page is empty).
- The run is repeated, say, once or twice a month to make sure that new stubs get properly tagged.
Note: a rather lengthy debate took place on WP:AN a few weeks ago, and a consensus emerged that such tagging was desirable for the whole WP project. Obviously, a bot can't tag everything, but I think it can handle this one. :)
Can someone approve this please? :)
Thanks, Grafikm 15:09, 21 July 2006 (UTC)
- This somewhat touches on some of the same issues as discussed in relation to Dark Shikari Bot, but I see no problem as such. The scope does seem rather wide, though: {{mil-hist-stub}} obviously makes intuitive sense, but does the wikiproject really want to "adopt" the whole of {{mil-stub}} (i.e. corresponding to the whole of Category:Military)? Perhaps when you're approved for a trial run, you might start off with just the former, and consult somewhat on the latter. Alai 04:48, 23 July 2006 (UTC)
- Well, aside from fictional stuff (which people really shouldn't be using {{mil-stub}} for anyways, I would think), we've already adopted basically all of Category:Military already. Kirill Lokshin 04:54, 23 July 2006 (UTC)
- Then it's not very well-named, is it? (BTW, you rather said the opposite when a "Stub-Class articles" category was being discussed for deletion, that there was no single hierarchy to your WPJ's scope...) At any rate, if there's any scope whatsoever for "false positives", it's not the best place to start. Alai 05:10, 23 July 2006 (UTC)
- Well, no; the project's scope is actually broader than merely what's in Category:Military ;-) As far as false positives, I don't know how best to handle that. (What's the danger of having a few extra articles tagged, though? These are only talk page tags, and tend to be removed from articles where they don't belong with minimal fuss.) Kirill Lokshin 05:14, 23 July 2006 (UTC)
For your collective information, I note that there are 11,216 articles in or under {{mil-stub}}. That even misses a few, since for some incomprehensible reason, aircraft are largely split by decade, rather than into military and non-. There's 3,374 that are specifically "military history". Let's be as sure as possible these are all within the Wikiproject's scope before getting carried away with this (especially the "non-historical" ones). Alai 07:40, 25 July 2006 (UTC)
- Well, I am leaning against a bot. I have just been cleaning up some 75 articles in the cartridge category that had the WPMILHIST banner on the talk page, which I assume were done by a bot. If the article was read, it clearly stated that the cartridge was used for sporting or hunting, with no military use referenced at all. Articles should be read and assessed at the same time.--Oldwildbill 10:45, 25 July 2006 (UTC)
- There is no tagging bot here that I'm aware of, at least not today. -- Grafikm 17:00, 25 July 2006 (UTC)
I just took a quick run through the various stubs involved here. The only one in which I found articles not within the project's scope is {{weapon-stub}} and its children, as there are some hunting-related articles there. Hence, we should not automate tagging for that tree. As far as I can tell, however, all of the other children of {{mil-stub}} are reasonably safe to tag by bot. Kirill Lokshin 04:00, 26 July 2006 (UTC)
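Mechanically, the run is a talk-page prepend over the agreed stub categories, skipping anything already tagged so that monthly re-runs stay idempotent. A sketch with hypothetical helpers:

```python
def tag_milhist(wiki, stub_categories, summary):
    """Prepend {{WPMILHIST}} to talk pages of articles in the given cats."""
    for category in stub_categories:
        for article in wiki.category_members(category):
            talk = "Talk:" + article
            text = wiki.get_text(talk)  # "" when the talk page is empty
            if "{{WPMILHIST" in text:
                continue  # tagged on an earlier run; leave it alone
            wiki.save_text(talk, "{{WPMILHIST}}\n" + text, summary)
```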
Bots in a trial period
User:JAnD
Please give the bot flag to this bot. It is used for adding interwiki links, starting from cs:. This bot has this flag on cs: and is based on pywikipedia. All requested details are on its userpage. Thanks, JAn Dudík 19:49, 29 July 2006 (UTC)
- For reference, the userpage contains the following content: "This is unregistered manual controlled interwiki bot for years, days, cities and names (things, what are similar or mostly similar in different languages. Bot is controlled by ]". I'm not really familiar with the approvals of interwiki linking bots, but you're not really giving us that much info about the bot. Can you provide some more clarification? alphaChimp 20:08, 29 July 2006 (UTC)
- Which details?
- Bot's platform: pywikipedia, interwiki.py
- Periodicity: always started manually on cs:, usually for one category or 10-20 pages.
- There are many missing articles about Czech towns on cs: which have their article on en:. There are also many articles about years where on cs: there is only a link to en:, but no backlink from en: or other languages.
- My usual contribution on en: is adding cs: links. When I can do it, why can't a bot do it?
- JAn Dudík 10:34, 30 July 2006 (UTC)
- Your bot appears to be doing a lot of removals of interwiki links; what criteria are used for that? — xaosflux 13:58, 30 July 2006 (UTC)
- It seems to only be removing Japanese interwiki links. All of them are dead anyway. Maybe it follows all the interwikis and automatically removes a link when it finds a blank page. Is that what it does, Jan? alphaChimp 14:07, 30 July 2006 (UTC)
- It looks to be removing live links too; I don't read the other languages, though, so I can't tell what it is delinking. — xaosflux 14:13, 30 July 2006 (UTC)
- I couldn't find any removals of live ones. Every one seemed like the generic "Create a Page" Wikipedia page (like you see here). Of course, I can't read Japanese, so I just assume the page meant what I thought it did. alphaChimp 14:28, 30 July 2006 (UTC)
- Please check these edits, and reply with the removal reasons:
- Thanks, — xaosflux 14:43, 30 July 2006 (UTC)
- Yes, you are right: when the bot finds a dead link, it automatically removes it.
- ad 1: on fr: it is a disambiguation, on the other languages it is not
- ad 2: sl: does not exist; the others are towns and villages (another category)
- ad 3: there are two gigues; the ones removed aren't the same
- ad 4: there are two words: memetics and mem
- all these changes were human assisted.
- JAn Dudík 06:54, 31 July 2006 (UTC)
- So, when and if this bot is running, will it be human assisted as it was during your test (in other words, will it be performing the type of removals Xaosflux was asking about above)? alphaChimp 15:50, 31 July 2006 (UTC)
- The interwiki bot can run automatically: when it finds two links for one page it will do nothing, and when it finds a non-existing link it will add new links; in both cases it writes this to a log on the owner's computer.
- When running assisted, it asks every time it finds more than one link for any language which one is correct, and it also asks before removing a link.
- JAn Dudík 11:27, 1 August 2006 (UTC)
- Thanks for the replies; one week trial approved. Please post diffs and any comments here during the run/when complete. — xaosflux 00:30, 2 August 2006 (UTC)
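As described above, the automatic rule removes only links whose target does not exist, and leaves conflicting links for a human. A sketch of that decision – `exists` is a hypothetical existence check against the foreign wiki:

```python
def classify_interwikis(exists, links):
    """Split (language, target) pairs into keep, remove, and ask piles."""
    keep, remove, ask = {}, [], []
    for lang, target in links:
        if not exists(lang, target):
            # Dead link: safe to drop automatically.
            remove.append((lang, target))
        elif lang in keep and keep[lang] != target:
            # Two different links for one language: needs a human.
            ask.append((lang, target))
        else:
            keep[lang] = target
    return keep, remove, ask
```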
EssjayBot III
EssjayBot_III (talk · contribs · deleted contribs · nuke contribs · logs · filter log · block user · block log)
With the success of EssjayBot II archiving project and project talk pages, I've gotten several requests for it to do user talk pages. I'd prefer not to mix bot functions, just for my own peace of mind, and I didn't include user talk pages in my initial request, so I'm making a new request for this function. I personally am fine with doing it, or fine with not doing it; I'm just trying to provide those functions the people desire. Essjay (Talk) 10:08, 1 August 2006 (UTC)
- Sure sure sure, trials approved, post some diffs, etc...
- How often will this script run, and how many pages will it deal with per interval? Having a bot archive hundreds of pages per interval could be a bit of a resource hog. This bot should definitely be flagged when approved. — xaosflux 00:28, 2 August 2006 (UTC)
- It'd run no more than once a day, and I'm generally very careful to make sure things run at times when other bots generally don't (for example, the archiving of ANI is done at 0:05 UTC, since most bots run on the hour). Since it's done by crontab entry, it can be done with whatever frequency; additionally, I can set them up load balanced if need be, putting three or four in one file, then three or four an hour later, etc. So far, I've had a handful of people ask; I have no idea how many Werdnabot was doing, but I really have no intention of getting into archiving the user talk pages of half of Wikipedia. Essjay (Talk) 03:17, 2 August 2006 (UTC)
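For illustration, staggered crontab entries of the kind described; the script name and page arguments are hypothetical, only the off-the-hour scheduling idea is from the discussion:

```
# Archive ANI at 00:05 UTC, off the hour to avoid bots that run at :00.
5 0 * * * python archivebot.py "Wikipedia:Administrators' noticeboard/Incidents"
# Load-balance user talk archives into a later slot.
10 1 * * * python archivebot.py "User talk:Example"
```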
CrazynasBot
I would like to expand my AWB-run bot to have it subst user talk warnings in that namespace, per WP:SUBST. I have been running it manually for a while, with no problems so far. Crazynas 02:37, 31 July 2006 (UTC)
- You do realize that we already have at least 4 bots (mine included) doing this, right? alphaChimp 02:48, 31 July 2006 (UTC)
- I also realize that when I loaded up my script in AWB there were around 60 pages to subst... if you have ideas for other things I can do, I'd be more than happy to listen. :). Crazynas 02:55, 31 July 2006 (UTC)
- Haha. Nah, it's no problem. You can always check the Wikipedia:Bot requests page. Quite frankly, I don't think there's anything particularly wrong with having another bot subst'ing usertalk messages. Xaosflux - you agree? alphaChimp 03:28, 31 July 2006 (UTC)
- No problem having more of these; it just makes for shorter runs, and the risk of bot collision is minimal at this point. 1 week trial approved; post diffs when complete. — xaosflux 02:15, 1 August 2006 (UTC)
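The subst pass is a find-and-replace. Here is the shape of the rule, expressed as a Python regex over page text; the warning-template list is abbreviated and illustrative, not the bot's actual list:

```python
import re

# Abbreviated, illustrative list of user-warning templates to subst.
WARNINGS = ("test", "test2", "test3", "test4")
SUBST = re.compile(r"\{\{\s*(%s)\s*([|}])" % "|".join(WARNINGS))

def subst_warnings(wikitext):
    """Rewrite {{test}} as {{subst:test}}, and likewise for the others."""
    return SUBST.sub(r"{{subst:\1\2", wikitext)

print(subst_warnings("{{test}} and {{test2|Article}}"))
# -> {{subst:test}} and {{subst:test2|Article}}
```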
BetacommandBot expansion of task
I would like to have the bot automatically tag categories that have remained unused for five days with {{db-catempty}} Betacommand 05:20, 29 July 2006 (UTC)
- How would it do that? What would the bot run in? How frequently would you run it? αChimp 16:41, 29 July 2006 (UTC)
I will get a dump of all categories in Special:Unusedcategories and wait five days; at that time I will get another dump and compare them manually. Any category that remains on the list is subject to deletion after only four days on the list. Once I have the list of old empty cats, I will then run the bot in AWB to tag each category with {{db-catempty}} per the deletion policy. I plan to run it no more than once a day, less as the number of empty cats goes down. Currently there are 3926 empty categories. Betacommand 17:26, 29 July 2006 (UTC)
- An initial influx of 4000 CSDs would be a strain on anyone working CAT:CSD; can this be set to only flag new lonely categories for CSD? — xaosflux 22:46, 29 July 2006 (UTC)
- Do "unused cats" include categories that are "empty" of pages but have subcats? — xaosflux 22:46, 29 July 2006 (UTC)
I plan to use Special:Unusedcategories, which states: "The following category pages exist although no other article or category make use of them." As for the comment about flooding CSD, I have a solution: create a category, something to the effect of Category:Categories that have been empty for more than five days, and put a link to it on CSD, so that the list can be accessed without flooding CSD. I will also maintain a record of the data I use at User:BetacommandBot/oldCategories — Preceding unsigned comment added by Betacommand (talk • contribs)
- One comment about the current 3926 empties. I'm sure that a lot of them will remain empty, but is that the number of categories empty after 4 days, or is that just the total empty right now? (those two stats might actually be different) alphaChimp 05:18, 30 July 2006 (UTC)
- Right now; update: 3914. Betacommand 06:59, 30 July 2006 (UTC)
- Using a holding category sounds good for the initial run, I've got no objection to using {{db-catempty}} tagging for the rest. Can you implement a check to prevent them from getting tagged on subsequent runs though (e.g. if in holding cat SKIP) ? — xaosflux 03:14, 31 July 2006 (UTC)
- Already built in, but thanks for the suggestion Betacommand 03:29, 31 July 2006 (UTC)
- Would a trial run be possible without having to deal with the 3926 cats first? — xaosflux 03:31, 31 July 2006 (UTC)
I plan to log them under User:BetacommandBot/oldCategories and slowly release the first large batch into the speedy queue over a period of time. I was also thinking that instead of {{db-catempty}} I could create a similar template, put all the cats into it, and place a link on WP:CSD so that admins can handle the large number of cats as they get time, without flooding CSD. Betacommand 03:45, 31 July 2006 (UTC) PS: kind of like a backlog category which I will slowly release into CSD Betacommand 03:47, 31 July 2006 (UTC)
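The manual comparison step amounts to a set intersection across two snapshots taken five days apart. A minimal sketch, assuming each dump is a plain-text file with one category title per line:

```python
def still_empty(old_dump, new_dump):
    """Categories listed in BOTH snapshots of Special:Unusedcategories."""
    def read(path):
        with open(path, encoding="utf-8") as f:
            return {line.strip() for line in f if line.strip()}
    # Only categories empty in both dumps are {{db-catempty}} candidates;
    # anything populated or deleted in between drops out automatically.
    return sorted(read(old_dump) & read(new_dump))
```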
- Trial period approved. Please limit the initial run to no more than 500 categories. If possible have the trial include some empty cats created after the trial starts. — xaosflux 03:59, 31 July 2006 (UTC)
- Trial run in progress; please see User:BetacommandBot/oldCategories for a full record and list of pages Betacommand 20:27, 1 August 2006 (UTC)
- Can the per-page logs check a master page or the prior days' pages so as not to regenerate so many hits per page? — xaosflux 00:47, 2 August 2006 (UTC)
- Trial run bloody well unapproved. He set it to tag for deletion hundreds and hundreds of categories that, whilst empty, are key elements in series of categories (like buildings by year, for years we don't have a building for yet). I've just sat and reverted them, and I don't have a bot. Much greater thought needs to be applied before approving expansions like this. Please; would this page stop handing approvals out for pointless tasks that don't need doing, and that need doing with a modicum of human judgement? Having witless bots do things merely for the sake of not having them idle is irritating at best. And this particular bot has already had one run of other edits (substs) reverted for also being wrong, and pointless. -Splash - tk 01:16, 2 August 2006 (UTC)
- We do run short trials to see what may happen and require further review before just letting bots go. I'm not sure why we really need empty cats such as Category:5.56mm machine guns (one of the cats tagged). WP:CSD#1 does state that empty categories are speediable, and according to your statements above these may not even fall into the old categories that may have contained articles before, requiring further investigation. If consensus is that we should have all of these empty categories, how did we end up with the speedy criterion? Regardless, due to this complaint, trials relating to editing the categories themselves are suspended, but ones that are preparing lists are not. Additionally, there has got to be a better way to keep track of the gathering than categorizing the categories. This bot has been blocked by Splash, pending a response. — xaosflux 02:29, 2 August 2006 (UTC)
- Well no, there were some that did seem a bit arbitrary, but I wasn't going to manually review each of 500ish categories when ~99% of them needed reverting. I do not think there is necessarily a "consensus" somewhere I can point to about the categories, but it does seem fairly obvious to me (and others at User talk:Kbdank71#Bot-tagging of unused categories) that obliterating large chunks of sequenced categories, just because they happen to be empty is wrong. Empty categories should be deleted when they risk confusion or duplication etc; that's what the CSD is for, not for indiscriminate application. Also, just because something meets a CSD does not mean it is required to be deleted, only that it may be.
- Xaosflux, to answer your question: yes, I'm fine of course with someone unblocking the bot once its owner appreciates the difference in the manner it needs to be operated. That means not merely asking here and getting an out-of-context nod before embarking on major-sized operations like this, but checking very thoroughly, in advance, that they make sense. WP:BOLD does not need to apply to bots. That said, I do not view the unblock as at all time-critical; the actions of this bot fall well below the 'critical' threshold and in some cases well below the 'useful' threshold.
- On a broader note, I'd like it if runs of 500 edits were not the trial size. These have to be reverted by patient humans, and the whole point of a trial is to save time if things go belly-up. I would think trials of no more than 50, with well-announced trialling, would be very much more appropriate. -Splash - tk 02:51, 2 August 2006 (UTC)
- Response:
- A: My bot did not tag anything for deletion.
- B: All it did was state that the category was subject to the old category rule WP:CSD#1.
- C: If someone lets me know about a mistake, I can have the bot revert all of its edits quickly, without overworking humans.
- D: It has been brought to my attention that some cats need to be kept. I would like assistance in creating a list of cats that are key elements in series of categories; I will use them as an exclusion list.
- E: My bot did only 300-350 edits regarding this subject.
- F: If at any point there is concern about my bot, leave a message on its talk page and it will stop editing until I have reviewed and resolved the issue.
Betacommand 05:19, 2 August 2006 (UTC) PS: I will work according to xaosflux's instruction: "Regardless, due to this complaint, trials relating to editing the categories themselves are suspended, but ones that are preparing lists are not." Betacommand 06:13, 2 August 2006 (UTC)
- A, B: It amounted to tagging for speedy deletion.
- E: That's still a factor of 10 too many for a trial run.
- F: That is a useful feature that you should document on the bot's user page. -Splash - tk 19:33, 2 August 2006 (UTC)
- What is key to take away from this, Betacommand, is that you must check with relevant projects and processes and people before undertaking sweeping actions with a bot that are not precedented. Never before, to my knowledge, have categories been mass-eliminated under CSD C1, and so CfD should have been contacted before going ahead. This approvals page is evidently rather below par for making sure of the utility and appropriateness of the editing a bot will undertake, really only making sure it is technically sound and possible. It is your responsibility to do the necessary homework. -Splash - tk 19:33, 2 August 2006 (UTC)
- I think that this bot is a good idea. If someone were to create a category, you would expect it to have at least one item in it. Keeping empty ones provides no navigational value. --Shane 05:49, 2 August 2006 (UTC)
- Well, obviously sequential years have navigational value if I'm a newbie trying to work out what categories I can use without realising I can make them (particularly if I'm an anon and prohibited from creating pages at all). -Splash - tk 19:33, 2 August 2006 (UTC)
- After receiving confirmation that this function is disabled for the time being (logs will still be written for debug purposes, but the bot will not make live edits related to this), I've unblocked the bot. However, if the bot is making any funny edits, feel free to reblock again. Titoxd 06:07, 2 August 2006 (UTC)
I see no problem with this task. However, if certain categories are "required" or whatever, why not tag them with a template to that effect? Bots can detect the template and ignore it, and it will give humans the same cue. --Chris (talk) 02:13, 3 August 2006 (UTC)
- I agree with Chris on this; a template would be a good idea, as it makes the bot easily future-compatible, which is important for maintenance of series categories. I think that the task is a good idea, as long as some sort of exceptions mechanism is implemented and all the needed categories are tagged before the bot runs. --digital_me 02:59, 3 August 2006 (UTC)
User:Splash, I object to how you have handled this situation: you are going against Misplaced Pages policy (empty categories older than four days are subject to deletion), and to your inability to read.
This page meets Misplaced Pages's criteria for speedy deletion. It is a category that has no items whatsoever in it, and it has been empty for at least four days (CSD C1). Please remove this notice if at any point this category is no longer empty.
That is how I marked the categories. '''Never''' did I list one for deletion or attempt to speedy it; all the bot did at this point was state 'This page meets Misplaced Pages's criteria for speedy deletion'. Regarding the claim that the categories are key elements in series of categories: show some proof that there are guidelines to keep them and that they are exempt from WP:CSD#1. Also, please show some consensus about keeping them that involves more than four editors; that is NOT a consensus on Misplaced Pages. I am operating per Misplaced Pages:deletion policy. Please show me some guideline or policy that exists to back up your personal opinion, and the uncalled-for, hostile, bordering-on-rude behavior you have shown in this discussion. Betacommand 19:11, 3 August 2006 (UTC)
- The CSDs are things that may be speedily deleted if an admin agrees with you. They are not things that must be. It does strike me that perhaps you don't agree that you should check with relevant projects, people and processes before deploying your bot. That's a little disappointing. -Splash - tk 23:44, 3 August 2006 (UTC)
- I was planning to work with CSD to implement this task once I was completely comfortable with the logging and tagging of old categories by the bot. You blocked my bot before I could do this. If you will look above, I was thinking about putting a link on WP:CSD as a solution, but I was planning to discuss this with CSD before nominating anything for deletion. As per my previous post, I am still waiting for answers to the questions about your actions: what policy were you using as a guideline for reverting the tags and blocking my bot? Where is there a discussion about keeping them? And the other questions that I have raised. Betacommand 04:22, 4 August 2006 (UTC)
Ok, I am approving this bot for a trial run under the following conditions:
- This trial is no more than 50 edits
- The bot will not tag with any tag that adds items to CAT:CSD; use a separate sub-category, as Tawkerbot did on old user talk pages.
- If it's a subcategory, something should show to that effect (unless someone has a diff to prove otherwise); if there is nothing there (say the category page is blank), for goodness' sake, tag it!
-- Tawker 05:07, 4 August 2006 (UTC)
- done, please see Category:Categories that have been empty for more than four days and are subject to deletion Betacommand 07:02, 4 August 2006 (UTC)
- Nice…and you can see straight off the bat that those "A-class…" categories are each part of a set and should be tagged for exemption somehow. No mistake, this is a useful tool for rootling out categories which have gone unused and unloved, it just needs some mechanism for detecting those which are lying in wait for articles to populate them as part of an ongoing process.
One thing, though: it would have been nice if the bot had spelt "Catagories" correctly. HTH HAND —Phil | Talk 08:25, 4 August 2006 (UTC)
- All the A-Class categories are now excluded; if there are any others that someone can identify, I will add them to the exclusion list. Thank you for your input. Betacommand 17:44, 4 August 2006 (UTC)
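The tagging rule being debated here reduces to a small check. Below is a minimal sketch of it in Python; the helper names, the exclusion mechanism, and the example pattern are invented for illustration and are not BetacommandBot's actual code:

<pre>
# Minimal sketch of the check under discussion; helper names and the
# exclusion mechanism are invented, not BetacommandBot's actual code.
import re

EXCLUDE_PATTERNS = [re.compile(r"A-Class", re.I)]  # series categories to skip

def should_tag(cat_title, member_count, days_empty):
    """Tag only categories that are empty, old enough, and not excluded."""
    if member_count > 0:
        return False                       # not empty
    if days_empty < 4:
        return False                       # CSD C1 requires four days
    return not any(p.search(cat_title) for p in EXCLUDE_PATTERNS)
</pre>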
User:VoABot II
This bot is basically part of what VoABot did, but I decided to split the functions up. This bot watches certain pages and reverts edits matching regexps denoted as blacklisted. It can check for added content, summaries, and logged-out IP ranges, depending on what is needed to stop recurring AOL/shared-IP vandals. It's been running for a while as part of VoABot, and I'd just like to have this second account flagged, since it can sometimes make a lot of edits in a fairly short time period (though nothing like Rambot). Here are the pages that it watches (it updates the list hourly). Voice-of-All 03:23, 28 July 2006 (UTC)
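A rough sketch of how such a blacklist check might look, assuming regex lists for content and summaries plus an admin whitelist; all patterns and names below are invented for illustration, not VoABot II's actual rules:

<pre>
# Rough sketch of a regexp blacklist check; the patterns, whitelist,
# and function name are invented for illustration.
import re

BAD_CONTENT = [re.compile(r"\.{7,}")]           # e.g. long runs of periods
BAD_SUMMARY = [re.compile(r"some banned phrase", re.I)]
WHITELIST = {"SomeAdmin"}                       # admins are never reverted

def is_blacklisted(user, added_text, summary):
    if user in WHITELIST:
        return False
    if any(p.search(added_text) for p in BAD_CONTENT):
        return True
    return any(p.search(summary) for p in BAD_SUMMARY)
</pre>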
- While I support the idea that these functions should be run from separate accounts, I'm missing where the reversion function of VoABot was approved; the original request does not mention this "feature". What are the parameters and other tests used to prevent bad reverts here? The page for this bot also states that it has already been approved here??? — xaosflux 04:08, 28 July 2006 (UTC)
- Also just saw edits like these, that don't look like obvious vandalism, with no discussion going on on the talk page. — xaosflux 04:19, 28 July 2006 (UTC)
- VoABot's page says that its functions are approved, but not VoABot II's. VoABot II's functions were tested as part of VoABot for a while. Also, it does not try to find general vandalism, as TB2 and the like already do. It goes after edits typical of specific banned users or AOL attacks. The edits you were looking at were by a revolving IP adding the same consensus-rejected spam links for weeks. That and Gibriltarian-like stuff is what this reverts. It uses the standard rollback method (revert to the last non-X contrib, after checking that X is actually the last contributor) and whitelists admins. Voice-of-All 05:05, 28 July 2006 (UTC)
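A minimal sketch of that rollback rule, assuming a newest-first revision history; the names are illustrative only:

<pre>
# Sketch of the standard rollback: restore the newest revision not by
# the offending user, but only if that user still has the top revision.
def rollback_target(history, bad_user):
    """history is newest-first: a list of (user, revid) pairs."""
    if not history or history[0][0] != bad_user:
        return None                  # someone else edited since; do nothing
    for user, revid in history:
        if user != bad_user:
            return revid             # restore this revision
    return None                      # the user is the page's only author
</pre>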
- Removed note (assuming now it was a copy-and-paste of the old bot page). Wouldn't sprotecting the page be more effective than revert-warring with an anon, though? — xaosflux 05:15, 28 July 2006 (UTC)
- Usually they just give up after a while. Also, they may just make throwaway accounts to get through, which does little if the edits are reverted promptly. Additionally, the AOL/shared-IP RC patrol has stopped several trolling streaks across many random pages, something sprotection is useless against. Also, the note you removed was a legend; all the checked features were approved (VoABot II had no checked features). Voice-of-All 05:22, 28 July 2006 (UTC)
- Voice of All is careful and intelligent. I think he would make a great bot. —Centrx→talk • 20:01, 29 July 2006 (UTC)
- Edits such as this one seem to violate WP:BITE and WP:AGF, and are being made without so much as a talk message, with an edit summary stating that the edit is RESTRICTED. — xaosflux 22:35, 29 July 2006 (UTC)
- I've changed the regexps "from previous vandal streaks" so it won't revert that edit again (or any edit with 7 or so periods in a row). I've modified the edit summaries to include "vandalism" and "spam links". Also, what should the edit summary be changed to? Should it notify all non-AOL shared-IP users (notifying AOL users is pointless, as the address changes)? Voice-of-All 22:59, 29 July 2006 (UTC)
- You may want to check with the Tawkerbot team; they have a notification system built into their reverts. As for the edit summary, telling someone that their editing is restricted goes against "...that anyone can edit" without any forewarning (unlike {{sprotect}}). What some others have done is create a subpage of the bot and have the edit summary link to it; the page explains to the user why they were reverted (generically, of course) and how to avoid getting reverted again (assuming good faith, and not requiring that they get an account). — xaosflux 23:14, 29 July 2006 (UTC)
- Subpages: good idea. I'll work on those now. I'd like to get the RC patrol for this bot on again ASAP, as AOL vandal streaks can really come hard sometimes. Voice-of-All 23:26, 29 July 2006 (UTC)
- OK, I started the subpage, and the edit summaries now link there.Voice-of-All 00:48, 30 July 2006 (UTC)
- What little I've seen of this bot, it's done a very good job at stopping IP-hopping AOL vandalism (where a vandalbot hops all over a subrange, vandalising). I like the basic idea. Though xaosflux is probably correct about how sociable it is, its technical capabilities seem quite accurate. You should fix it up a little, and it will be a nice complement to what the Tawkerbots do. Kevin_b_er 01:15, 30 July 2006 (UTC)
- One-week trial approved; please maintain a record of any complaints (except those from obvious vandals), and link to them (if any) along with some diffs here. Don't get discouraged if someone blocks you along the way; even the Tawkerbots got blocked at the start of their runs. — xaosflux 14:19, 30 July 2006 (UTC)
- Too many times I might add (well, Tawkerbot2 got the worst of it, Tawkerbot4 only got a user block methinks) -- Tawker 05:15, 31 July 2006 (UTC)
SmackBot task approval VI
Task: To perform other formatting fixes to ISBNs. (SB already removes ":" after the "ISBN".) The additional functionality, for example:
- Delinks ISBN where it's followed by an ISBN
- Debolds/de-italicises "ISBN" where it's followed by an ISBN
- Replaces blanks in ISBNs with "-".
Method: Automatically using AWB or otherwise.
Speed: 1-4 per minute.
Number: c. 100 per month.
Frequency: Approx every data dump
Duration: About an hour
Testing: A one-off cleanup has been done
Rich Farmbrough 06:30 26 July 2006 (GMT).
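For illustration, the three fixes above might be approximated with regular expressions along these lines (a sketch, not SmackBot's actual AWB rules):

<pre>
# Rough regex equivalents of the three fixes listed above; a sketch,
# not SmackBot's actual AWB rules.
import re

def fix_isbns(text):
    # 1. Delink "ISBN" when it is followed by an ISBN number
    text = re.sub(r"\[\[ISBN\]\]\s+(?=[\d-]{9,17}X?)", "ISBN ", text)
    # 2. Debold/de-italicise "ISBN" in the same position
    text = re.sub(r"'{2,5}ISBN'{2,5}\s+(?=[\d-]{9,17}X?)", "ISBN ", text)
    # 3. Replace blanks inside the number itself with "-"
    text = re.sub(r"(?<=ISBN )(\d[\d ]{8,14}[\dX])",
                  lambda m: m.group(1).replace(" ", "-"), text)
    return text
</pre>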
- One-week (up to a month if needed) trial approved; restrict the trial of this new function to 100 edits, and post results when the trial is complete. — xaosflux 16:31, 29 July 2006 (UTC)
- Cheers. Rich Farmbrough 13:37 3 August 2006 (GMT).
User:Botdotcom
- What: Updating links associated with templates for The German Solution to userboxes using AWB.
- Maintainer: nathanrdotcom. — Nathan / 23:57, 26 July 2006 (UTC)
- Per conversation on talk pages, I have added you to the AWB check page (non-automated) so you can start doing any initial setup. Await a reply from the approvals group before starting testing, though. — xaosflux 02:24, 27 July 2006 (UTC)
- Besides templates in "what transcludes here", do you plan on doing any other replacements? — xaosflux 02:24, 27 July 2006 (UTC)
- If there's anything else that needs doing, I'll re-list it here (under additional tasks) and specify as well as mention it on the bot's userpage. It's only going to be used for userboxes, as far as I can think of. — Nathan / 02:26, 27 July 2006 (UTC)
- Trial period approved for a one-week trial; keep the edits down to no more than 2-3/min to keep recent changes usable, and keep it under 1000 replacements during the trial. — xaosflux 01:50, 28 July 2006 (UTC)
Alphachimpbot, 3rd Task:WP:GUS
I'd like to start using my bot to migrate userboxes found in the linked list to userspace, per the instructions on that page and Jimbo's comments. Specifically, the bot would find and replace the templates with their userspace versions. It would run using AWB in bot mode, and would be applied to all namespaces. I would run the bot 1 userbox at a time. After completing the migration of a userbox, it would be tagged as CSD G4. Admittedly, there could be some resentment from those whose userpages are modified, but nothing (appearance-wise) should actually change, aside from the target of the transclusion. αChimp 05:50, 26 July 2006 (UTC)
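A toy sketch of such a template migration, assuming a simple textual find-and-replace; the template name and userspace target in the example are hypothetical:

<pre>
# Toy version of the migration: replace a template transclusion with
# its userspace copy. The names below are hypothetical examples.
import re

def migrate(text, template, userspace_target):
    pattern = re.compile(r"\{\{\s*%s\s*(\|[^}]*)?\}\}" % re.escape(template),
                         re.IGNORECASE)
    return pattern.sub(
        lambda m: "{{%s%s}}" % (userspace_target, m.group(1) or ""), text)

# e.g. migrate(page_text, "User chess", "User:Example/chess")
</pre>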
- As long as the replacements are identical, the users seem to be pretty much OK with it; Fluxbot just did a 2500+ run yesterday, and only one question. That being said, the more people wanting to work this list, the better (it's a temporary project after all). See my talk page for a matrix of some suggestions I got from others regarding syntax. — xaosflux 23:51, 26 July 2006 (UTC)
- Right. I actually saw that your bot was doing it, and saw your request for others to help. I just know that there are users out there that will get upset that someone is modifying their userpage, in any way shape or form (even if it's to preserve original formatting). Anyway, I'd like some good old approval here. Anyone? ;) αChimp 05:08, 27 July 2006 (UTC)
- Ah, I see that you just got into the BAG. I guess that means I can start trialing it? αChimp 05:23, 27 July 2006 (UTC)
- One-week trial approved; as you already have a bot flag on this, please run with an AWB bot wait of 10 seconds or more, and keep the trial run under 1000 edits. — xaosflux 01:30, 28 July 2006 (UTC)
- This is going to make me seem like a bumbling idiot, but I misinterpreted your comment above as approving the run. I've already got the 1000 edits (at 12-second intervals, actually). Sorry about that. You can check the contributions and approve or reject or extend or whatever. Once again, sorry about that. αChimp 01:41, 28 July 2006 (UTC)
- No problem; keep trialing, but hold off on any more mass use (maybe another 500 or so) of this to give a chance for community input here or to your talk(s). — xaosflux 01:47, 28 July 2006 (UTC)
- All done. Three responses, nothing significant (one accused me of vandalism... but never responded when I explained; another brought up a legitimate concern regarding evaluation of capitalization in replacement... which was implemented; another was a compliment from an admin). Tell me what you want me to do. αChimp 17:36, 29 July 2006 (UTC)
- Post a few diffs here, and give it a few days for anyone to notice anything they want to complain about. — xaosflux 02:17, 1 August 2006 (UTC)
Sample diffs: . That's pretty much the gist of it. Still no complaints. alphaChimp 04:37, 1 August 2006 (UTC)
User:BOTepho
- What: orphaning (by linking in most cases) fair use images outside of ns:0
- How: Based off lists generated by SQL queries and reviewed to exclude some potentially OK pages (Portals, over which there is some debate; Misplaced Pages:Today's featured article and its subpages; Misplaced Pages:Recent additions; etc.), using replace.py
- How often: Probably in batches of ~500 pages based on the type of image (such as {{albumcover}}); see the list of images to remove and the list of pages to edit. With 10747 images and 19947 pages it may take a while still. Once this group is done, updates will depend on the frequency of database dumps and/or whenever the toolserver works again and I can wrangle someone into running a report/get an account.
Kotepho 09:19, 12 July 2006 (UTC)
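One plausible reading of "orphaning (by linking)" is prefixing a colon, so the page keeps a reference but no longer displays the fair-use image; a sketch under that assumption, not BOTepho's actual code:

<pre>
# Sketch assuming "orphaning by linking" means prefixing a colon, so
# the page keeps a reference but no longer displays the image. Captions
# containing nested brackets would defeat this simple pattern.
import re

def orphan_image(text, image_title):
    # [[Image:Foo.jpg|thumb|caption]] -> [[:Image:Foo.jpg]]
    pattern = re.compile(r"\[\[(Image:%s)(\|[^\]]*)?\]\]"
                         % re.escape(image_title), re.IGNORECASE)
    return pattern.sub(r"[[:\1]]", text)
</pre>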
- This one looks like it's going to be a magnet for complaints from people who don't understand image use policy, but it does sound necessary. I'd start with a very well-written FAQ page and leave a talk page message on their page saying what the bot did and why it did it before I would run/approve it. -- Tawker 21:36, 15 July 2006 (UTC)
- Durin's page is quite good. Kotepho 21:49, 16 July 2006 (UTC)
- Isn't this similar in at least some ways to what OrphanBot does? I'd like to hear Carnildo's comments on this, given that he runs OrphanBot and is on the approvals group. Essjay (Talk) 14:52, 16 July 2006 (UTC)
- It is basically the same role. The main reason I brought it up is that, looking at OrphanBot's talk over time, it does get a lot of complaints (I suspect it's mostly people not familiar with policy, however) - it's just something FRAC. I have no problems with the bot personally and I'll give it the green light for a trial run; I just wanted to make sure Kotepho knew what deep water (complaints-wise) this is :) -- Tawker 18:52, 16 July 2006 (UTC)
- Well, someone has to do it, and I'd likely do at least some of them by hand so the complaints will come either way. Kotepho 21:49, 16 July 2006 (UTC)
- Ahh, what the heck, run it in trial mode and lets see what happens -- Tawker 07:24, 21 July 2006 (UTC)
User:GurchBot
Between February and April this year, I made a large number of typo-fixing edits (approximately 12,000 in total). All of these were done manually – every edit was checked before saving – although I have written software similar to AutoWikiBrowser to assist with the process. This software is designed specifically for spellchecking and so, while not as flexible as AWB, has a number of advantages. It reports the changes made in the edit summary, can check articles very quickly (in less than a second), and can easily switch between different corrections (for example, "ther" could be "there", "the" or "other") in a way that AWB cannot. Central to this is a list of over 5000 common errors that I have compiled from various sources, including our own list of common misspellings, the AutoCorrect function of Microsoft Office, other users' AWB settings, and various additions of my own. As I mentioned, I have done an extensive amount of editing with the aid of this software, using my main account. I have recently made further improvements to the software; over the last couple of days I have made a few edits to test these improvements, and I am now satisfied that everything works.
While I believe Misplaced Pages is now so heavily used that (a) no one person could hog the servers even if they wanted to, and (b) the Recent Changes page is more or less unusable anyway, a couple of users have expressed concerns about the speed of these edits (which reached 10 per minute during quiet periods). Most notably, Simetrical raised the issue during my RfA. As I stated in my response to his question, I was not making any spellchecking edits at that time, but I explained that I would request bot approval should I decide to make high-speed edits in the future. That time has now come; I have created User:GurchBot, and I request permission to resume exactly what I was doing in April, but under a separate account. I will leave the question of whether a bot flag is necessary to you; I am not concerned one way or the other.
Thanks – Gurch 19:45, 15 July 2006 (UTC)
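A sketch of a corrections table that supports several candidate fixes per error, as described above; the entries are illustrative, not taken from the actual list:

<pre>
# Sketch of a corrections table with several candidates per error, as
# described above; the entries are illustrative.
CORRECTIONS = {
    "teh": ["the"],
    "ther": ["there", "the", "other"],   # the human operator picks one
    "recieve": ["receive"],
}

def candidates(word):
    return CORRECTIONS.get(word.lower(), [])
</pre>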
- As long as you are checking it yourself and ignoring the "sic"s, it seems good to me. Alphachimp 23:54, 15 July 2006 (UTC)
- Yes, I check every edit before I save it, and I ignore "sic"s when I see them. I have incorrectly fixed a couple of "sic"s in the past because I (the fallible human) failed to spot them; one of my improvements has been to add "sic"-detection to the software so it can alert me to this, and hopefully make an error less likely in future – Gurch 10:03, 16 July 2006 (UTC)
- I don't have any issue with this, provided you aren't doing any of the spelling corrections that tend to cause problems, such as changes from Commonwealth English to American English and vice versa. As long as it's only correcting spelling errors and doesn't touch spelling variations, it should be fine. I'd like to see a week's trial (which is standard) to get a good idea of exactly what will be taking place, and also for users to add their comments. A week's trial is approved; please report back this time next week. Essjay (Talk) 14:47, 16 July 2006 (UTC)
- I have never corrected spelling variations, regional or otherwise – being from the UK, I have long since given up and accepted all variants as equally permissible anyway. If you wish, I can upload the entire list and replace the (now out-of-date) User:Gurch/Reports/Spelling; I will probably do this at some point anyway. I won't be around in a week's time, so you can expect to hear from me in a month or so. For now, you can take this to be representative of what I will be doing – Gurch 16:11, 16 July 2006 (UTC)
- Just wanted to make sure I'd said it. ;) A month is fine; we normally do a week's trial, but I have no issues with something longer. Let us know how things are going this time next month. Essjay (Talk) 22:22, 16 July 2006 (UTC)
If these are manually-approved edits, I wouldn't think approval as a bot would be strictly necessary, though I could imagine the speed might be a concern, especially if errors are (or were) slipping through. Given that this is more of a "semi-bot", I suggest it not be bot-flagged, so as to reduce the likelihood of errors going undetected subsequently as well. Alai 04:24, 18 July 2006 (UTC)
- In fact approval as a bot wasn't necessary – as I mentioned above, I used to do this using my main account, and would have continued to do so, except that a number of users expressed their concern and suggested I request approval for a bot. So I have done that. I freely admit that errors will inevitably slip through at some point; in fact, I've just had to apologize for correcting a British spelling which was, fortunately, spotted and reverted very quickly. Of course I never intended to do any such thing – it turns out that this (actually correct) spelling has been listed on Misplaced Pages:Lists of common misspellings/For machines (one of the sources for my correction list) since November 2002; somehow it was never spotted in nearly four years. My fault, of course, for assuming the list was correct; I'm now scrutinizing my list thoroughly to avoid repeating this mishap. This is the first time I've made such a miscorrection, the reason being that my old list was constructed by hand, whereas I've now tried to expand it (and so catch more errors with each edit) by including lists from other sources. In the past I have occasionally mis-corrected "sic"s and errors in direct quotations; the chance of this should be much lower now that my software can detect these itself, even if I miss them. Based on what I have done to date, though, I reckon my error rate is about 1 in every 1000 edits, which I can live with – Gurch 11:38, 18 July 2006 (UTC)
- As I said above, you're cleared for a month-long (instead of a week, at your request) trial; check back with us then and we'll set the bot flag. Essjay (Talk) 00:47, 19 July 2006 (UTC)
Concern: The list of common misspellings is utter shit; please do NOT use it. It replaces many words that are actually words. --mboverload@ 20:34, 28 July 2006 (UTC)
Robthbot
Proposed disambiguation bot, manually assisted, running m:Solve_disambiguation.py. I will be using this to work on the backlog at Misplaced Pages:Disambiguation pages with links; bot-assisted disambiguation is substantially more efficient than any other method. The bot will run from the time it gets approval into the foreseeable future. --Robth (Cleanup?) 16:20, 13 June 2006 (UTC)
- I see no reason we couldn't have a trial run, at least. robchurch | talk 20:44, 1 July 2006 (UTC)
- Thanks. I'll start running it at low speed in the next couple of days. --Robth 04:04, 2 July 2006 (UTC)
- (In response to a request for a progress report): I've made a small run, which went quite well, but limits on my disposable time have prevented me from making any larger runs just yet--Robth 01:07, 10 July 2006 (UTC)
- No problem, trial extended, keep us informed and report back when you have enough done for us to make a final decision. Essjay (Talk • Connect) 08:24, 12 July 2006 (UTC)
Approved
- The following discussion is an archived debate. Please do not modify it. Subsequent comments should be made in a new section.
User:CbmBOT
- What: The purpose of this bot is simply to update the "Number of Articles Remaining" table on Category:Cleanup by month. The bot can be run manually or scheduled to run automatically (it is currently not), but needs to run no more than once a day to keep the table updated to the scope that is currently being done. The bot is a PHP 5.1.4 command-line script, which runs on a Unix machine and makes use of cURL and extensive regular-expression parsing.
- Why: As the data for this table is constantly changing, it is rather tedious for a human to update it manually. Instead, this bot will retrieve all the relevant information and update the section automatically. Given that manual updates generally introduce errors (see update history), performing this task automatically is a better approach.
- How: The bot uses cURL/regex to pull and parse a minimal number of pages – the Cleanup by month category pages, plus Category:Cleanup by month, Category:Music cleanup by month, and Special:Statistics. An average run of the bot, which takes about three to four minutes, pulls fewer than one hundred pages. As well, if at any time a page is pulled incorrectly (usually as a result of a timeout), the bot will abort, pulling no further pages and making no changes to the category page.
On each category page, the line "There are ## pages in this section of this category." is parsed to determine how many pages are on that page of that category. As categories can span multiple pages, "(next 200)" links are also followed. Finally, the same is done for the subcategories of Category:Music cleanup by month. When all counts have been retrieved, the bot will pull the total number of articles in the English wikipedia from Special:Statistics, and then format a new wikitable for output into the article. It is also possible to configure the bot to output the wikitable to stdout, rather than edit the page, if necessary.
The bot keeps track of a number of statistics, including total number of pages processed, total time, etc. While functionality does not yet exist to do so, it would not be hard to extend the bot script to maintain these statistics on a subpage of the User page. —The preceding unsigned comment was added by Dvandersluis (talk • contribs) 20:43, 19 July 2006 (UTC)
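A sketch of the per-page parsing described above, written in Python for brevity (the real bot is PHP/cURL); the HTML patterns are assumptions about the page markup, not the bot's actual expressions:

<pre>
# Sketch of the per-page count parsing (Python for brevity; the real
# bot is PHP/cURL). The HTML patterns are assumptions about the markup.
import re

COUNT_RE = re.compile(r"There are (\d+) pages in this section of this category")
NEXT_RE = re.compile(r'href="([^"]+)"[^>]*>next 200')

def count_category(fetch, url):
    """fetch(url) returns HTML; follows "(next 200)" links, sums counts."""
    total = 0
    while url:
        html = fetch(url)
        m = COUNT_RE.search(html)
        if not m:
            raise ValueError("page pulled incorrectly; aborting run")
        total += int(m.group(1))
        n = NEXT_RE.search(html)
        url = n.group(1) if n else None
    return total
</pre>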
- OK, looks OK; can you make a trial run and post a diff, please? -- Tawker 07:24, 21 July 2006 (UTC)
- I'm not exactly sure what you're asking me to do... run the bot once? –Dvandersluis 03:31, 23 July 2006 (UTC)
- Pretty much; the trial run is for a week, and you can run the bot, carefully checking its edits. During the test keep the edits to no more than 2-3 per minute. After the run(s), post the diffs here for review by the group/community. — xaosflux 00:55, 26 July 2006 (UTC)
- Oops, hadn't seen this earlier. Here are the diffs:
- Everything looks in order here, appears useful, and does not appear to be a resource hog. BOT APPROVED. Does not appear to need a flag. — xaosflux 01:30, 4 August 2006 (UTC)
- The above discussion is preserved as an archive of the debate. Please do not modify it. Subsequent comments should be made in a new section.
- The following discussion is an archived debate. Please do not modify it. Subsequent comments should be made in a new section.
User:Drinibot
In the past I've been using Drinibot to handle tasks related to TFD (substing or removing templates prior to deletion, changing templates, etc.). Now I'd like to request permission for Drinibot to handle CFD tasks as well.
So, in detail, this is what I need Drinibot to do:
- Keep the capitalization-redirect creation capability (used once in a while).
- Substing, removing, or replacing templates (for TFD)
- Moving, renaming, and emptying categories (for CFD)
-- Drini 19:48, 31 July 2006 (UTC)
- OK, go ahead --pgk 19:58, 31 July 2006 (UTC)
- The above discussion is preserved as an archive of the debate. Please do not modify it. Subsequent comments should be made in a new section.
- The following discussion is an archived debate. Please do not modify it. Subsequent comments should be made in a new section.
User:Wherebot
I would like permission to run a bot to tag newly created copyvio articles with {{db-copyvio}} (although initially I would only tag with {{nothing}} and exit, so I can look over the edit, until I am confident in its accuracy in identifying copyvios). The bot is written in Perl, although it calls replace.py (from the pywikipedia framework). Once I work out the bugs, I would want to have the bot running continuously. -- Where 01:44, 12 July 2006 (UTC)
- How do you intend to gather the "newly created copyvio articles"? — xaosflux 03:00, 12 July 2006 (UTC)
- The bot watches on the RC feed at browne.wikimedia.org. Every new article is downloaded, and the text is run through a yahoo search to see if there are any matches outside of Misplaced Pages. -- Where 04:12, 12 July 2006 (UTC)
- But what if the text is a GFDL or PD source, or quotes a GFDL/PD source?--Konstable 04:52, 12 July 2006 (UTC)
- Also, how about fair use quotes? --WinHunter 05:59, 12 July 2006 (UTC)
- Wouldn't it be better to report potential copyvios (at an IRC channel, and at WP:AIV or a similar page for non-IRC folks) instead of just tagging them outright? Also, you could use Copyscape, similar to how the Spanish Misplaced Pages implemented this idea. Try talking to User:Orgullomoore for ideas. Titoxd 06:35, 12 July 2006 (UTC)
- Yes, I suppose that since the bot is bound to have a large number of false detections of copyvios, it would be best to report them in a way other than simply tagging articles for speedy deletion. I like Titoxd's idea of listing the possible copyvios on a page similar to AIV (later, perhaps, I can implement an IRC notification bot if this goes okay). I looked at Copyscape, however, and it will only allow 50 scans per month unless I pay them money, which I am not willing to do. Thanks for your time! -- Where 14:44, 12 July 2006 (UTC)
- Again, ask Orgullomoore. He runs more than just 50 scans a month, so you two might be able to work something out. Titoxd 05:31, 13 July 2006 (UTC)
- What would be best is if it put a notice on the talk page "This article might be a copyvio" and added that article to a daily list (in the bot's userspace) of suspected copyvios. Then humans could use their judgement to deal with them properly... overall I think it would speed things up tremendously, since we'd have all the likely copyvios in one place. It should probably avoid testing any phrases in quotation marks, but other than that, I don't think it would pick up a huge number of false positives. In my experience with newpage patrol, for every 99 copyvios there's maybe 1 article legitimately copied from a PD/GPL site. Like I said earlier, it's rather amazing that we don't have a bot doing this already, and I'm glad someone's developing it finally. Contact me if you need any non-programming help with testing. --W.marsh 21:30, 12 July 2006 (UTC)
- The problem with putting a notice on a talk page would be that it would create a large number of talk pages for deleted articles; that being said, if you still think it is a good idea, I will trust your judgement and implement it anyway once I am confident in the bot's accuracy. Also, just out of curiosity, what do you think is wrong with searching for exact phrases? (when I was not testing for exact phrases, the bot claimed that a page was a copyvio of a webpage that listed virtually every word in the English language). Thanks for your suggestions, and your time. -- Where 23:02, 12 July 2006 (UTC)
- Oh, you're probably right about the talk pages; I hadn't thought of that. For the other thing, I mean that it shouldn't search for phrases that were originally in quotation marks in the test article, since those are probably quotations that might be fair use. But it should definitely search for other exact phrases from the article on Google/Yahoo/whatever. By the way, I think Google limits you to 3,000 searches/day; Yahoo might too... not sure if that will have an impact. --W.marsh 23:09, 12 July 2006 (UTC)
- I got the impression that Yahoo was more lenient than Google. But if worst comes to worst, I will just have to use the web interface rather than the API (which should allow me unlimited searches). -- Where 23:31, 12 July 2006 (UTC)
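Pulling the thread's suggestions together, the detection loop might look roughly like this; search_web() is a hypothetical stand-in for the Yahoo search call (not a real API), and the mirror filter is deliberately crude:

<pre>
# Sketch of the detection loop, incorporating the quoted-phrase
# suggestion; search_web() is a hypothetical stand-in for the Yahoo
# call, and the mirror filter is deliberately crude.
def extract_phrases(text):
    # naive sentence split; good enough for a sketch
    return [s.strip() for s in text.split(".") if len(s.split()) >= 8]

def find_copyvio_source(article_text, search_web):
    phrases = [p for p in extract_phrases(article_text) if '"' not in p]
    for phrase in phrases[:3]:                    # limit search usage
        for url in search_web('"%s"' % phrase):
            if "wikipedia" not in url.lower():    # skip obvious mirrors
                return url                        # first external match
    return None
</pre>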
- This seems like a good idea, but the only concern I would have is that the process be supervised by a non-bot (i.e. human, hopefully). Tagging the talk page or on an IRC channel seems like a good idea; admins would simply have to remember to check those often and make sure that the bot is accurate. Thanks! Flcelloguy (A note?) 05:11, 13 July 2006 (UTC)
- I agree; the bot will have a fair amount of errors because of the concerns voiced above. Thus, the bot will edit only one page, which will be outside article space. This page would contain a listing of suspected copyvios found by the bot. During the trial period, I would set the bot to edit a page in my userspace; if the bot is successful, perhaps the page could be moved to the Misplaced Pages namespace. Does that address your concern? If not, I'm open to suggestions :) -- Where 18:05, 13 July 2006 (UTC)
- I like this idea in general. My only concern is that even with liberal filters it could create a massive, unmanageable backlog. Have you tried to estimate how many pages per day/week would this generate? Misza13 19:10, 13 July 2006 (UTC)
- I have not done so yet; however, based on tests so far, I would estimate that the backlog would be manageable. It is hard to tell for sure though, without a trial. Thus, I just started the bot so it commits to a file, and does not touch Misplaced Pages. When I finish this trial, I will be able to give an estimation of how many suspected copyvios it finds per day. -- Where 19:29, 13 July 2006 (UTC)
- I just did a 36-minute test, in which 4 potential copyvios were identified. If I did the calculations correctly, this would mean that about 160 potential copyvios would be identified on a daily basis (assuming that the rate of copyvios is constant, which is obviously not the case). This is a lot, but should be manageable (especially if A8 is amended). Also, I should be able to reduce the number of false identifications with time. Two of the items identified were not copyvios; one was from a Misplaced Pages mirror, and I am still examining the cause of the other one. -- Where 21:53, 13 July 2006 (UTC)
- Yes, having the bot edit one page and listed the alerts there would alleviate my concerns. The test is also quite interesting, though I would like to perhaps see a longer test - maybe 24 or 48 hours? 36 minutes may not be reliable data to efficiently estimate the daily output. Thanks! Flcelloguy (A note?) 23:56, 13 July 2006 (UTC)
- Okay; I am starting another test and will have it run overnight. -- Where 00:08, 14 July 2006 (UTC)
The bot is currently listing possible copyvios to User:Where/cp as it finds them. -- Where 01:56, 15 July 2006 (UTC)
- Suggestion: could you change the listing format (see below)?
- That's the current format
- Suggested format above. — xaosflux 03:44, 15 July 2006 (UTC)
- Good idea! The bot now uses that format. Thanks! -- Where 15:14, 15 July 2006 (UTC)
- New format looks better, but of the 3 items listed on there right now, none are actionable, see comments per item on that page. — xaosflux 01:03, 16 July 2006 (UTC)
- Thanks :). I removed the items. -- Where 01:48, 16 July 2006 (UTC)
Howdy. The bot has been running for a tad over a week. If anybody has any suggestions for improving the bot, I would be appreciative. Also, I am kind of curious how long the trial period lasts. Many thanks, -- Where 03:33, 26 July 2006 (UTC)
- It's been looking pretty good; some false positives, though, so I think this should stick to listing on a copyvio-patrol-type page rather than tagging the articles (the early request). The trial period usually lasts a week - whenever it's done :) Upon going live, do you have a project page this would go to, or would you keep it in your userspace? If possible, perhaps a "reported at ~~~~~" or a per-day break might be helpful. This bot looks like it's being useful, though; we just have to get the results in front of some live editors. — xaosflux 03:58, 27 July 2006 (UTC)
- I agree completely that the bot most certainly cannot be trusted to directly tag articles (if I change my mind, I will come back here, but I don't think it is possible to get it to the desired level of accuracy). I would prefer to move User:Where/cp to Misplaced Pages:Suspected copyright violations, or something of that nature. The bot now includes timestamps, as you suggested. -- Where 02:37, 28 July 2006 (UTC)
- Bot approved, no bot flag though (edits are not high-speed, and the edit summaries may be useful on a watch list).— xaosflux 04:36, 28 July 2006 (UTC)
- Final suggestions on this page: if possible, take the parts about "remove this line" out of the edit summaries; make the project page and a /Reports subpage; transclude the subpage onto the main page (that way people can watchlist it separately if desired). Then get the word out (WP:AN, a link on CAT:CSD, WP:VP, etc.). — xaosflux 04:36, 28 July 2006 (UTC)
- The above discussion is preserved as an archive of the debate. Please do not modify it. Subsequent comments should be made in a new section.
- The following discussion is an archived debate. Please do not modify it. Subsequent comments should be made in a new section.
Approved, not flagged
EssjayBot II
EssjayBot_II (talk · contribs · deleted contribs · nuke contribs · logs · filter log · block user · block log)
EssjayBot's little brother, EssjayBot II, would like approval to archive pages in the same fashion as Crypticbot did. This is not the Crypticbot code, but a pywikipedia version worked up by Misza13. It archives based on the last timestamp in a section, working in whole-number days; sections without timestamps will be untouched.
Since WerdnaBot is handling most of Crypticbot's old haunts, my intention is to set it up to do various other pages that are not regularly archived: WP:CHU, WP:BN, and the talk pages of each. My intent is to have anything older than 2 days archived off CHU, anything older than 7 days archived off BN, and anything older than 10 days archived off the talk pages. Anyone interested in using EssjayBot II for archiving should drop me a talk page message.
I've done a quick test in my sandbox and everything seems in order. Due to the nature of the bot (archiving pages) and the limited number of times it should be editing, I don't think a flag is needed or desirable. Approval requested for the pages mentioned and others as needed. Essjay (Talk) 16:49, 26 July 2006 (UTC)
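A minimal sketch of timestamp-driven section archiving as described above; the parsing is simplified and the names are illustrative, not the bot's actual code:

<pre>
# Sketch of timestamp-driven archiving: a section is stale when its
# newest signature timestamp is older than N whole days; sections with
# no timestamp are left alone. Simplified, illustrative parsing.
import re
from datetime import datetime, timedelta

STAMP = re.compile(r"(\d{2}):(\d{2}), (\d{1,2}) (\w+) (\d{4}) \(UTC\)")

def is_stale(section_text, days, now=None):
    now = now or datetime.utcnow()
    stamps = [datetime.strptime(" ".join(m.groups()), "%H %M %d %B %Y")
              for m in STAMP.finditer(section_text)]
    if not stamps:
        return False                 # no timestamp: leave section alone
    return now - max(stamps) > timedelta(days=days)
</pre>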
- Sounds fine just fine, fine just fine, fine. Recommend no flag. — xaosflux 23:49, 26 July 2006 (UTC)
- Approved for one week trial (or however long it will take to provide the diffs), post results when complete. — xaosflux 02:47, 27 July 2006 (UTC)
- Will this bot be creating new archive pages, or just moving discussions? — xaosflux 02:50, 27 July 2006 (UTC)
- It creates archive pages where necessary; the archive number is hard-coded for now, so when it needs to move to a new archive, I just go in and increase it by one, and if the page doesn't exist, it will create it. (This makes it unsuitable for large pages like AN/ANI right now, but I hope to have an auto-incrementer available before too long.) It has already archived BN and the talk pages of BN & CHU, as demonstrated in its contribs. I've also offered the functionality to the Arbitration Committee for their talk page, and perhaps the Requests for Clarification section of RFAR. Due to the low volume of comments on BN and the talk pages of either page, CHU will probably be the only page it archives in the next few days. Essjay (Talk) 05:01, 27 July 2006 (UTC)
- It looks a lot like Werdnabot; please point out the differences. Cheers —Minun 13:18, 27 July 2006 (UTC)
- Apologies; I checked the user page and understand now, so I agree with the new bot —Minun 13:26, 27 July 2006 (UTC)
Oy, I notice that Werdna648 (talk · contribs), who runs Werdnabot (talk · contribs), has not edited since July 20th (a week) and indicates he may not be back for several months. Given that Crypticbot, which did what WerdnaBot is doing now, was quickly blocked when Cryptic left, it looks like I may need to have EssjayBot II do some of the pages Crypticbot/WerdnaBot was doing, in particular, AN & ANI (I don't want to get into user talk page archiving as WerdnaBot did). I'm going to ask Misza13 to look into an auto-incrementing archive option (right now you have to hand-insert the archive number into the code) in case EssjayBot II is needed for AN/ANI. I would normally say that this would be included in the original authorization I requested ("the pages mentioned and others as needed") but I thought it best to bring it up anyway. Essjay (Talk) 05:01, 28 July 2006 (UTC)
- Having several bots with archiving algorithms is fine by me (even VoABot has one built into the RfPP archive, though it is a bit more complex). I'd rather they be maintained by active editors like Essjay. Voice-of-All 07:47, 28 July 2006 (UTC)
I've gotten a request to put the bot on ANI, as Werdnabot is having difficulty with sections that have === subsections. My intent is to have the bot archive anything older than 24 hours once a day at midnight; I would go with longer, but the page is hovering near 300KB with anything over 48 hours old being archived 4 times a day. I'm also going to put in a manual archive trigger, though I don't plan to publicize it highly since manual archiving shouldn't be needed. I've run a test in my sandbox and everything worked fine; can I get approval to go ahead and put the bot on ANI on this schedule? Essjay (Talk) 06:04, 30 July 2006 (UTC)
- Sure, go ahead with a 1-week trial. --pgk 09:35, 30 July 2006 (UTC)
- It's been working fine elsewhere, so no need for more trials. --pgk 09:53, 30 July 2006 (UTC)
- The above discussion is preserved as an archive of the debate. Please do not modify it. Subsequent comments should be made in a new section.
Approved, flagged
- The following discussion is an archived debate. Please do not modify it. Subsequent comments should be made in a new section.
User:CrazynasBot
This bot would run using AWB in bot mode, replacing underscores with spaces on templates subst'ed by User:BetacommandBot. Crazynas 01:10, 29 July 2006 (UTC)
- That seems fine. Approved. Essjay (Talk) 02:26, 29 July 2006 (UTC)
- The above discussion is preserved as an archive of the debate. Please do not modify it. Subsequent comments should be made in a new section.
- The following discussion is an archived debate. Please do not modify it. Subsequent comments should be made in a new section.
User:Alaibot, stub template merger
There's a substantial backlog of re-sorting tasks for already-identified stubs in oversized categories. Sometimes these are double-stubbed with two stub types that have now been given a common child type, and in such cases this becomes a purely mechanical task of replacing {{X-stub}} and {{Y-stub}} with {{X-Y-stub}}. To help deal with these, I've written a simple extension of replace.py (please excuse my Python) which does such replacements on an "all or nothing" basis (that is, it makes no change if it's not able to replace both original templates with the new one). This would run on lists of double-tagged articles produced from an offline partial database dump, and/or StubSense. As this uses different code from that already trialed and approved, I suggest a new trial run under a non-bot-flagged account, User:AlaibotToo, with the finally-approved (if and when) updates to be done with the existing bot-flagged account. Alai 04:15, 18 July 2006 (UTC)
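A sketch of the "all or nothing" replacement described above; the names are illustrative, not the actual replace.py extension:

<pre>
# Sketch of the "all or nothing" merge: rewrite the page only if both
# parent stub tags are present exactly once each. Illustrative only.
import re

def merge_stubs(text, x, y, xy):
    tx = re.compile(r"\{\{%s\}\}" % re.escape(x))
    ty = re.compile(r"\{\{%s\}\}" % re.escape(y))
    if len(tx.findall(text)) != 1 or len(ty.findall(text)) != 1:
        return text                          # make no change at all
    text = tx.sub("{{%s}}" % xy, text)       # replace one tag...
    return ty.sub("", text)                  # ...and drop the other

# e.g. merge_stubs(page, "X-stub", "Y-stub", "X-Y-stub")
</pre>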
- No problem, do the test with "Too" and you can run the full deal on whichever you decide after the trial. Essjay (Talk) 00:44, 19 July 2006 (UTC)
- Trial started, a couple of hundred done so far; feel free to review whenever's good for the AG. Addendum: I'd also like to "roll in" the related task of re-stubbing single existing stub types, using the standard template.py code. Should be straightforward from a code point of view, being unmolested by my rehacking, the art in this instance is running it on the right stubs... (Generated from category intersection, as above.) For example: the members of both Category:Organization stubs and Category:Non-profit organizations, changing the stub type (only) to Category:Non-profit organization stubs. Alai 06:03, 20 July 2006 (UTC)
- Ran a trial of both the above functions; no problems that were evident to me (aside from an error caused by an operator typo in the first of a "batch" (I always inspect these for just such an eventuality), which I fixed up by hand), and no hate-mail. No additional flagging required; please consider for final approval. Alai 04:20, 25 July 2006 (UTC)
- Approved. Essjay (Talk) 04:59, 25 July 2006 (UTC)
- The above discussion is preserved as an archive of the debate. Please do not modify it. Subsequent comments should be made in a new section.
- The following discussion is an archived debate. Please do not modify it. Subsequent comments should be made in a new section.
User:RebelRobot
The bot is manually assisted, performing interwiki linking and standardization, plus handling double redirects. It runs on the pywikipedia framework. It shall mostly see to it that articles from the Romanian Misplaced Pages get linked to their homologues in the English one. --Rebel2 19:15, 17 July 2006 (UTC)
- No problem. Give it a week's trial run and report back this time next week. Essjay (Talk) 00:43, 19 July 2006 (UTC)
Alright, the week's over. Seems fine to me. --Rebel2 02:48, 27 July 2006 (UTC)
- Indeed. I see no problem with full approval, flag set. Essjay (Talk) 00:28, 28 July 2006 (UTC)
- The above discussion is preserved as an archive of the debate. Please do not modify it. Subsequent comments should be made in a new section.