== List of articles likely to have one or no sources == | |||
While making ] recently, it occurred to me that we ought to have a way of at least semi-automatically identifying and tagging articles with either a single source or none. I'd like to be able to do an AWB run of likely such articles.
Given that there are many different ways to do sources, I'd like to start with a conservative query, which lists all articles that contain <em>none</em> of the following strings: | |||
* <ref | |||
* http:// | |||
* Notes | |||
* cite | |||
* Reference | |||
* Sources | |||
* Citation | |||
* Bibliography | |||
* sfn | |||
I don't know how to construct a RegEx query with a negative (the internet has examples, but I struggle to convert them into Wikipedia's flavor), so I'd appreciate some help. Could anyone help me generate this list? Cheers, <span style="border:3px outset;border-radius:8pt 0;padding:1px 5px;background:linear-gradient(6rad,#86c,#2b9)">]</span> <sup>]</sup> 05:14, 14 November 2024 (UTC)
: is a start. It gives 10000 results then times out. ] ] 06:24, 14 November 2024 (UTC) | |||
::You'll want to at least make that case-insensitive, anchor "ref" and maybe "cite" to word boundaries, and match "https://" too. But still, ] isn't ]. —Cryptic 06:34, 14 November 2024 (UTC)
:::...holy crap, it is. It ''shouldn't'' be. —Cryptic 06:35, 14 November 2024 (UTC)
::::The underlying ElasticSearch cluster has a read-only ], which can be ''queried''. So I'd say this page is the right place for such requests. – ] (]) 07:41, 14 November 2024 (UTC) | |||
:::::If someone comes here looking for help with Elasticsearch's middle-end, they're going to be very, very disappointed. —Cryptic 08:13, 14 November 2024 (UTC)
::Thanks, @]! After expanding the query to <code><nowiki>-insource:/(ef|http|otes|ite|ources|itation|ibliography|sfn|list of|lists of|link|further reading|Wiktionary redirect)/ -intitle:list -deepcategory:"Set index articles"</nowiki></code> it's starting to turn up mostly useful results. Cheers, <span style="border:3px outset;border-radius:8pt 0;padding:1px 5px;background:linear-gradient(6rad,#86c,#2b9)">]</span> <sup>]</sup> 07:17, 14 November 2024 (UTC) | |||
:::You can get more results before it times out by adding more non-regex filters. For instance, adding <code>-hastemplate:"Module:Citation/CS1"</code> gives 15k results instead of just 2k. – ] (]) 07:39, 14 November 2024 (UTC) | |||
:Anyway, the sort of things ''this'' page can do to answer your original question are to give you lists of pages with zero, or zero or one, external links, or that don't transclude any of a set of templates, or both; and as a bonus filter out redirects (which I'm fairly sure search does whether you like it or not), disambigs, and - to some extent - list pages. —Cryptic 07:16, 14 November 2024 (UTC)
:: Maybe rename this page to ]. ] ] 20:13, 14 November 2024 (UTC)
:::Or we could ask people to read past the page title to the first two sentences. —Cryptic 02:55, 15 November 2024 (UTC)
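The advice above - anchor "ref" to a word boundary, match case-insensitively, and cover https:// as well as http:// - can be illustrated with ordinary Python regexes; CirrusSearch's insource:// flavor differs in detail, and the sample strings here are invented:

```python
import re

# An unanchored, case-sensitive pattern both overmatches and undermatches:
loose = re.compile(r"ref")
# "prefix" contains "ref" as a substring, so it matches with no <ref> tag at all;
# "<REF>" is missed entirely because of case.
assert loose.search("a prefix only") is not None
assert loose.search("<REF>x</REF>") is None

# Word boundaries plus case-insensitivity behave as intended:
strict = re.compile(r"\bref\b", re.IGNORECASE)
assert strict.search("a prefix only") is None
assert strict.search("<REF>x</REF>") is not None

# Matching both http:// and https:// needs the optional "s":
url = re.compile(r"https?://")
assert url.search("see https://example.org") is not None
assert url.search("see http://example.org") is not None
```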
== Syntax error due to using a reserved word as a table or column name in MySQL == | |||
https://quarry.wmcloud.org/query/87911 | |||
https://stackoverflow.com/questions/23446377/syntax-error-due-to-using-a-reserved-word-as-a-table-or-column-name-in-mysql | |||
It isn't handling the `user` table right, as "user" is an SQL reserved word, I think.
The syntax highlighter was showing "user" in red, so I surrounded it with backticks `user`, then it was showing in light blue. | |||
I think it needs to be highlighted in white to work correctly. But how? Wbm1058 (talk) 18:47, 14 November 2024 (UTC)
: Unrelated to the reserved word. `WHERE IS NULL(u.user_name)` should be `WHERE u.user_name IS NULL`. But see prior noise at ] if you want to continue this. ] ] 20:12, 14 November 2024 (UTC) | |||
::https://www.w3schools.com/sql/sql_isnull.asp indicates that my syntax should be valid. Two alternative ways to do the same thing? Regarding the "prior noise", I'm a more competent administrator who's checking page histories, and leaving redirects within user space alone. My current focus is on cross-namespace redirects from user pages of nonexistent users to outside of userspace. My recent deletion log will give you an idea; I'm trying to make a more specific query to reduce the noise level in the query results I've been working from. – Wbm1058 (talk) 20:53, 14 November 2024 (UTC)
::: Wikimedia uses MySQL (actually ] which uses MySQL-ish syntax), not SQL Server where your link says <code>ISNULL</code> (not <code>IS NULL</code> which the query uses) is valid. ] ] 21:06, 14 November 2024 (UTC)
::::MariaDB has ISNULL() too, and it works the way Wbm1058 was trying to use it (modulo the misplaced space). SQL Server's ISNULL() is a synonym of COALESCE() instead. x IS NULL is generally safer precisely because of that incompatibility. —Cryptic 21:29, 14 November 2024 (UTC)
:::::I tried just changing the syntax of the "IS NULL" statement as suggested. It was cooking on that for a while, and then: | |||
:::::: "'''Error''' | |||
::::::This web service cannot be reached. Please contact a maintainer of this project. | |||
:::::: | |||
::::::Maintainers can find troubleshooting instructions from our documentation on Wikitech." | |||
::::: Hopefully my query didn't just crash the server. – Wbm1058 (talk) 21:55, 14 November 2024 (UTC)
::::::It just ran to completion, so simply changing the "IS NULL" statement fixed the syntax error. Now on to figure out the results, and tweak the query to do what I really want it to do. Thanks for your help. Wbm1058 (talk) 22:09, 14 November 2024 (UTC)
FYI, I'm now feeling the joy. ] is my report of 400 pages which I think may all be safely speedy-deleted under ''U2: Userpage or subpage of a nonexistent user''. This report was culled from a ], by INTERSECT with the user table SELECT. This is indicative of the poor page-move interface design: editors who think they're publishing user drafts end up keeping pages in userspace when they really wanted to move them to mainspace, because they neglected the namespace dropdown in the move-page user interface. – Wbm1058 (talk) 14:11, 15 November 2024 (UTC)
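The set-operation culling described above can be sketched like so (SQLite, invented titles; the real report intersected candidate userpage titles against the replica's user table, and EXCEPT is shown here because it directly yields the nonexistent-user set):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE candidate(title TEXT);   -- userpages flagged by the broad query
CREATE TABLE user(user_name TEXT);    -- actually-registered accounts
INSERT INTO candidate VALUES ('Alice'), ('Ghost'), ('Phantom');
INSERT INTO user VALUES ('Alice'), ('Bob');
""")

# EXCEPT keeps candidates whose user does NOT exist (the U2 set);
# INTERSECT would instead give the candidates to drop from the report.
u2 = con.execute("""
    SELECT title FROM candidate
    EXCEPT
    SELECT user_name FROM user
    ORDER BY title
""").fetchall()
assert u2 == [('Ghost',), ('Phantom',)]
```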
== Dusty articles within all subcategories of a parent category == | |||
Is this possible? I'd like to get a list like ] but for anything within any subcategory of ]. It would make quite a nice little To Do list for times I feel like doing some research and writing but don't have a particular bee in my bonnet that very minute. Thanks for any help! ] (]) 15:16, 17 November 2024 (UTC) | |||
:What is "dusty"? Neither ] nor ] says what it does. Is it a sort by timestamp of last edit?{{pb}}In direct subcategories only, include the handful of pages directly in the category, or the whole tree? If the last, to what depth? Examples: ]→]→]→] is depth 2, and ]→]→]→]→] is depth 3; neither the page itself nor the root category count. —Cryptic 16:51, 17 November 2024 (UTC)
::Yes, it's a list of articles by date of most recent edit. | |||
::Hm, on the second question. Ideally I'd end up with a list of, say, food items that hadn't been edited in ten years. Or chefs, or restaurants, or food animals or whatever. Maybe I need to choose a more specific subcategory? ] (]) 17:24, 17 November 2024 (UTC)
:::Well, ok, | |||
:::*] is in ], but, strictly speaking, isn't in "any subcategory of ]". Should it be included in the list? | |||
:::*] is in ], which is a subcategory of ], so it should be. (It's also directly in ], but never mind that.) | |||
:::*] isn't in ] or any of its immediate subcategories, but it ''is'' in ], a subcategory of ]; so the article's in a sub-subcategory of ]. Include or not? | |||
:::*] isn't in Food or drink, its immediate subcategories, or (I think) any of ''their'' subcategories, but it's in ], a subcat of ], so it's at least in a sub-sub-subcategory of ]. Same question. | |||
:::The reason I need a maximum depth is that - like almost all reasonably broad categories - ] eventually includes a significant portion of ''all'' categories. Depth 10, for example, has 122639 different categories in the tree, out of 2.4 million total categories (including maintenance categories, category redirects, and so on), and you really quickly start getting unrelated pages like ] → ] → ] → ] → ] → ] → ] → ].{{pb}}Or, if you like, you can give me a list of categories to pull from. Even if it's a large list, or something like "Anything in any direct subcategory of ], ], ], ], ". —Cryptic 18:33, 17 November 2024 (UTC)
::::Oh, and do you want non-mainspace pages in the list or not? What about redirects? —Cryptic 18:38, 17 November 2024 (UTC)
:::::lol...clearly in over my head here. :D Thank you for your patience. | |||
:::::So, no to feed a cold, starve a fever. Yes to recipe, dulce de leche and ice milk. | |||
:::::I think maybe start with something that's likely to contain fewer extraneous things. ] in a way that will allow me to see, for instance, the articles that are in ] > ] > ] > ] that haven't been edited in the last ten years. ] (]) 18:50, 17 November 2024 (UTC) | |||
::::::Oh, no non-mainspace pages, no redirects. ] (]) 19:13, 17 November 2024 (UTC) | |||
:::::::None quite that old in either tree. ] for ] depth 3 (oldest is ], 2015-11-16 18:36:35 - see what I meant about unrelated pages?), ] for ] depth 4 (oldest is ], 2019-12-16T04:47:03). —Cryptic 19:19, 17 November 2024 (UTC)
::::::::Well, thank you for your work, and sorry to waste your time! ] (]) 19:35, 17 November 2024 (UTC)
:::::::::Not wasted at all. Not your fault the category system is terrible for datamining.{{pb}}There might be some value in finding the latest revision that wasn't marked minor, and maybe excluding ones made by bots too, but that's going to be harder and a lot slower. Would definitely need to cut the set of articles to look at to something on the order of a couple thousand before looking at the edits, rather than the tens of thousands in that first tree. —Cryptic 20:14, 17 November 2024 (UTC)
::::::::::Thanks. And I've actually already found an article that needs attention from your 87976 query, so win! | |||
::::::::::The point for me here is looking for categories that have many articles that haven't been updated since before sourcing started modernizing. It's a bit tricky because the articles that were created first -- probably in any category -- are also likely the articles that get updated often, have multiple watchers, etc. So it's possible there just aren't huge numbers of food articles that need this kind of attention. ] (]) 21:18, 17 November 2024 (UTC)
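The depth explosion described in this thread is easy to reproduce with a depth-limited breadth-first walk over a toy subcategory graph (all category names below are invented for illustration):

```python
from collections import deque

# Toy subcategory graph: parent -> direct subcategories.
subcats = {
    "Food and drink": ["Food culture", "Beverages"],
    "Food culture": ["Eating parties"],
    "Eating parties": ["Dinner"],       # drifting off-topic already
    "Beverages": [],
    "Dinner": ["Dining rooms"],         # by depth 4, clearly unrelated
    "Dining rooms": [],
}

def categories_within(root, max_depth):
    """All categories reachable from root in at most max_depth steps."""
    seen, queue = {root}, deque([(root, 0)])
    while queue:
        cat, depth = queue.popleft()
        if depth == max_depth:
            continue  # don't expand past the depth limit
        for child in subcats.get(cat, []):
            if child not in seen:
                seen.add(child)
                queue.append((child, depth + 1))
    return seen

assert categories_within("Food and drink", 1) == {"Food and drink", "Food culture", "Beverages"}
# Each extra level pulls in more (and less related) categories:
assert len(categories_within("Food and drink", 4)) > len(categories_within("Food and drink", 2))
```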
== Number of articles that are actually articles == | |||
There are {{NUMBEROFARTICLES}}, but AIUI that includes disambig pages, stand-alone lists, and outlines, and maybe even portals (i.e., all content namespaces, not just the mainspace) but excludes redirects. Is there a way to get a count of the number of just plain old ordinary articles, excluding the other types? (A percentage from a sample set is good enough; I'd like to be able to write a sentence like "Of the 6.9 million articles, 6.2 million are regular articles, 0.45 million are lists, and 0.2 million are disambig pages.") ] (]) 22:46, 17 November 2024 (UTC) | |||
: {{re|WhatamIdoing}} according to ], there are 362,957 of those. ] ] 23:29, 17 November 2024 (UTC) | |||
::] suggests that there are about a thousand of those, which will not have a material effect on the numbers. | |||
::] says they've tagged 131K pages. There are about 123,700 pages with "List of" or "Lists of" at the start of the title. ] (]) 00:37, 18 November 2024 (UTC) | |||
:There is no clear definition of what a "regular article" is. Also many pages are not correctly marked and categorized. Don't forget ], which look like ordinary articles, or might be, depending. -- ]] 00:43, 18 November 2024 (UTC)
:<nowiki>{{NUMBEROFARTICLES}}</nowiki> seems to be mainspace non-redirect pages. I'd thought it used other heuristics, too; I remember needing at least one link, and less confidently requiring a period? But it plainly doesn't anymore; I'm getting 6912240 for ns0 !page_is_redirect on the replicas now.{{pb}}There's only 362201 non-redirects in ] and mainspace. Most of the difference is in other namespaces, probably legitimately, though I'm surprised to see 208 in user:, 44 total in various talk namespaces, 9 mainspace redirects, and a single redirect in draftspace.{{pb}}114253 mainspace non-redirects in ], though 64 of those are in the disambig cat as well.{{pb}}Lists are less certain; there's no ]. I could try to count pages that are in any category starting with "Lists " or ending with " lists", but - not having done that before - don't have any idea how many it would miss, and how many it would miscount. Ditto with pages starting with "List of " or "Lists of " (which is easy - 120653, not counting any redirs or pages in the dabs or set index cats). —Cryptic 01:00, 18 November 2024 (UTC)
::Oh, and 11193167 redirects (so 18105407 total mainspace pages), if you care. —Cryptic 01:03, 18 November 2024 (UTC)
::So 6,912,240 non-redirect pages, of which 362,201 are dab pages and 120,653 are Lists (per title), and the rest (e.g., Outlines, Index pages) is immaterial. A good SIA looks like an article and an underdeveloped one looks like a dab page, which takes us back to GreenC's point about it not being a clear-cut question.
::All of this suggests that if you count SIAs as 'articles', then there are 6.429 million articles (93%) and if you count them as lists/dabs, then there are 6.315 million articles (91%). | |||
::Thanks, all. ] (]) 01:15, 18 November 2024 (UTC) | |||
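The counting above boils down to simple predicates over the page and categorylinks tables; a toy version in SQLite (invented rows, and joined on titles rather than the real schema's page_id, purely to keep the sketch self-contained):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE page(page_title TEXT, page_namespace INTEGER, page_is_redirect INTEGER);
INSERT INTO page VALUES
  ('Dog', 0, 0), ('List of dogs', 0, 0), ('Mercury', 0, 0),
  ('Doggo', 0, 1),          -- redirect: excluded from the article count
  ('Talk page', 1, 0);      -- non-mainspace: excluded too
CREATE TABLE categorylinks(cl_from TEXT, cl_to TEXT);
INSERT INTO categorylinks VALUES ('Mercury', 'All_article_disambiguation_pages');
""")

q = lambda sql: con.execute(sql).fetchone()[0]
# "Articles" here = mainspace non-redirects, like {{NUMBEROFARTICLES}} apparently counts.
articles = q("SELECT COUNT(*) FROM page WHERE page_namespace = 0 AND page_is_redirect = 0")
# Dab pages via category membership:
dabs = q("""SELECT COUNT(*) FROM page JOIN categorylinks ON cl_from = page_title
            WHERE page_namespace = 0 AND page_is_redirect = 0
              AND cl_to = 'All_article_disambiguation_pages'""")
# Lists via the title-prefix heuristic from the thread:
lists = q("""SELECT COUNT(*) FROM page WHERE page_namespace = 0 AND page_is_redirect = 0
             AND (page_title LIKE 'List of %' OR page_title LIKE 'Lists of %')""")
assert (articles, dabs, lists) == (3, 1, 1)
```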
== Median account age for EXTCON == | |||
Hello again, generous satisfiers of curiosity: | |||
Today's question is how old the typical currently active ] account is. The requirements are: | |||
* is currently active (perhaps made at least one edit during the last 30 days? Any plausible definition of active works for me, so pick whatever is easiest and cheapest to run) | |||
* meets EXTCON (all of which will have the EXTCON permission) | |||
I don't care whether it's date of first edit vs registration date. I also don't care whether it's all ~73K of them or if it's a sample of 5K–10K. I am looking for an answer to the nearest year ("Most active EXTCON editors started editing before 2014" or "If you see someone editing an article under EXTCON, they probably started editing more than 10 years ago"). | |||
Thank you, ] (]) 17:14, 19 November 2024 (UTC) | |||
:Hmm. user_touched has been scrubbed because it is private data. So I guess LEFT JOIN recentchanges to see who is active? This should only get us users who have made an edit in the last 30 days. Then run MEDIAN() on it. Let's see if ] finishes. If the count is 72,000ish, then I also need to add a WHERE to filter out the editors who aren't in recentchanges. –Novem Linguae (talk) 18:33, 19 November 2024 (UTC)
::That's going to get you not just every user with the user right - the whole point of a left join is that you get a result whether there's a row in the joined table or not - but a row in your resultset for ''every'' row they have in recentchanges. And you're leaving out admins, who have extended confirmed implicitly. Plus, even if it worked, it would be a dump of ~25k values.{{pb}}MariaDB has MEDIAN(), but I can't get it to work on user_registration no matter how I try to preprocess it first - it gives me "Numeric datatype is required for percentile_cont function" when I call it directly on the column, which is reasonable, but always 100000000 if I cast it to any kind of numeric value, which isn't. (Anyone know what I'm doing wrong? ].) But doing it longhand works just fine. ]: 28 May 2013. —Cryptic 19:37, 19 November 2024 (UTC)
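Doing the median longhand is also easy client-side: MediaWiki stores user_registration as YYYYMMDDHHMMSS strings, which sort chronologically, so the median is just the middle element after sorting. A sketch with invented timestamps:

```python
from statistics import median_low

# Invented registration timestamps in MediaWiki's binary(14) format;
# lexicographic order == chronological order, so no parsing is needed.
registrations = sorted([
    "20060101000000",
    "20130528120000",
    "20190704093000",
    "20240101000000",
    "20101115080000",
])

# median_low picks the lower of the two middle values for even-sized data,
# avoiding any arithmetic on strings.
assert median_low(registrations) == "20130528120000"
```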
Latest revision as of 16:45, 25 December 2024
Page for requesting database queries
This page has archives. Sections older than 14 days may be automatically archived by Lowercase sigmabot III when more than 4 sections are present. |
This is a page for requesting one-off database queries for certain criteria. Users who are interested and able to perform SQL queries on the projects can provide results from the Quarry website.
You may also be interested in the following:
- If you are interested in writing SQL queries or helping out here, visit our tips page.
- If you need to obtain a list of article titles that meet certain criteria, consider using PetScan (user manual) or the default search. PetScan can generate lists of articles in subcategories, articles which transclude some template, etc.
- If you need to make changes to a number of articles based on a particular query, you can post to the bot requests page, depending on how many changes are needed.
- For long-term review and checking, database reports are available.
Quarry does not have access to page content, so queries which require checking wikitext cannot be answered with Quarry. However, someone may be able to assist by using Quarry in another way (e.g. checking the table of category links rather than the "Category:" text) or suggest an alternative tool.
Draftifications by month
Hi everyone. Cryptic kindly created this query which shows how many draftifications took place between 2021-07 and 2022-08. Could someone please fork it to show dates from 2016 to 2024? If it's easier, I'm fine with seeing the number of draftifications by year instead of by month. Many thanks and best wishes, Clayoquot (talk | contribs) 03:38, 14 December 2024 (UTC)
- I've rerun the query in-place. —Cryptic 14:19, 14 December 2024 (UTC)
- Beautiful, thank you so much Cryptic! Clayoquot (talk | contribs) 16:29, 14 December 2024 (UTC)
List of all file redirects that are in use in mainspace
I wrote a query that lists all file redirects, at quarry:query/88966. Can this query be expanded to only list file redirects that are used in mainspace somewhere? –Novem Linguae (talk) 22:26, 19 December 2024 (UTC)
Update to NPP reports
Is it possible to add a link to the "#" column at Wikipedia:New_pages_patrol/Reports#Unreviewed_new_redirects_by_creator_(top_10) with an xtools redirs created link? It can target xtools:pages/en.wikipedia.org/USERNAME/0/onlyredirects

Similarly for Wikipedia:New_pages_patrol/Reports#Unreviewed_new_articles_by_creator_(top_10), targeting xtools:pages/en.wikipedia.org/USERNAME/all. Thanks! ~/Bunnypranav:<ping> 15:49, 20 December 2024 (UTC)
- Done and done. —Cryptic 18:56, 20 December 2024 (UTC)
- Thanks a lot ~/Bunnypranav:<ping> 04:06, 21 December 2024 (UTC)
Measuring the number of source links to each domain for a given article/set of articles
Command denied
I keep getting the error, "execute command denied to user 's52788'@'%' for routine 'enwiki_p.count'". I was using the page database, but even after I modified my query to only use the externallinks database (meaning I need to input a numerical page ID instead of using the title), I'm still getting the denial. What am I doing wrong here? Am I just not allowed to aggregate? Here's my query, simplified as much as possible and still not working:
SELECT count (el_to_domain_index) FROM externallinks WHERE el_from = 37198628 GROUP BY el_to_domain_index;
Safrolic (talk) 23:14, 21 December 2024 (UTC)
- Remove the space between `count` and the open paren. —Cryptic 23:21, 21 December 2024 (UTC)
  - (Or `set sql_mode = 'IGNORE_SPACE';` first. —Cryptic 23:24, 21 December 2024 (UTC))
- Wow. Thank you. Safrolic (talk) 23:29, 21 December 2024 (UTC)
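For the record, `count (` with a space makes MySQL/MariaDB's parser look for a stored routine named count, which is what the "execute command denied ... for routine 'enwiki_p.count'" message is about; with the space removed the aggregate works. A toy run of the corrected query in SQLite (which happens to tolerate the space, so only the fixed form is shown; the rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE externallinks(el_from INTEGER, el_to_domain_index TEXT);
INSERT INTO externallinks VALUES
  (37198628, 'org.example.'), (37198628, 'org.example.'), (37198628, 'com.news.');
""")

# count(...) with no space before the paren, grouped per domain.
rows = con.execute("""
    SELECT el_to_domain_index, count(el_to_domain_index)
    FROM externallinks
    WHERE el_from = 37198628
    GROUP BY el_to_domain_index
    ORDER BY 2 DESC
""").fetchall()
assert rows == [('org.example.', 2), ('com.news.', 1)]
```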
Lag, no results returned
Now I'm trying to get counts for all the external links from all the pages in a category. I want to do this for each separate page, and get lists of all the actual URLs, but y'know, baby steps. I used this query: https://quarry.wmcloud.org/query/89031
USE enwiki_p; SELECT el_to_domain_index, count(el_to_domain_index) FROM externallinks JOIN categorylinks ON cl_from = el_from WHERE cl_to = 11696843 GROUP BY el_to_domain_index ORDER BY count(el_to_domain_index) DESC;
I'm not getting any results and it takes ages to not get them. What am I doing wrong now? Also, how do I include pages in any subcategories, or does this include them automatically? Safrolic (talk) 00:57, 22 December 2024 (UTC)
I figured out that I need to use page despite the slowness it'll cause, because cl_to uses a name instead of an ID. So here is my new query, now also running on simplewiki for easier testing. https://quarry.wmcloud.org/query/89032
USE simplewiki_p SELECT page_title, el_to_domain_index, count(el_to_domain_index) FROM externallinks JOIN categorylinks ON cl_from = el_from JOIN page on cl_from = page_id WHERE cl_to = Canada GROUP BY page_title, el_to_domain_index;
This query though has a syntax error on line 2.
I also think I might be in the wrong place to ask for step-by-step help like this. If there's a better place for me to go, I'd appreciate the direction. Safrolic (talk) 02:18, 22 December 2024 (UTC)
- You don't need the USE statement on Quarry since you have to select a database there separately (since most are on different servers now); but if you keep it, you need to terminate it with a semicolon. Next error you'd get is that you need to quote 'Canada'. At least that one has a useful error message ("Unknown column 'Canada' in 'where clause'"). The reason your first query took forever is because `SELECT * FROM categorylinks WHERE cl_to = 11696843;` does a full table scan - it tries to coerce each row's cl_to (a string value) into a number, and then does a numeric comparison. There's no correct way to use the index on cl_to since many different strings compare equal to that number, in particular ones starting with whitespace. `SELECT * FROM categorylinks WHERE cl_to = '11696843';` on the other hand finishes instantly with no results (since Category:11696843 has no members). Categories are only loosely tied to the page at their title anyway. You won't get members of subcategories like that - you have to go far out of your way to do so, similar to quarry:query/87975. You would get the direct subcategories like simple:Category:Canada stubs themselves, if any happened to have any external links. Distinguish them by selecting page_namespace too, if you're not already filtering by it. —Cryptic 02:56, 22 December 2024 (UTC)
  - It sounds like I'm better off doing a multipart kludge- getting all the relevant page titles with Massviews or Petscan, running a single query to turn them into IDs, then using those IDs as el_froms so I only need the externallinks database. Thank you for your help! Safrolic (talk) 05:59, 22 December 2024 (UTC)
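The coercion hazard generalizes: once the string side is forced to a number, many distinct strings collapse to the same value, so an index over the strings can't satisfy the predicate. A rough Python analogy (MySQL's actual coercion rules differ in detail, but the many-to-one collapse is the same idea):

```python
# Under numeric coercion, all of these distinct strings compare
# equal to 11696843 - which is why the cl_to index can't be used
# for `cl_to = 11696843`.
lookalikes = [" 11696843", "11696843 ", "11696843", "11696843.0", "1.1696843e7"]
assert all(float(s) == 11696843 for s in lookalikes)

# A string-to-string comparison instead hits exactly one key:
assert len({s for s in lookalikes if s == "11696843"}) == 1
```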
Orphaned editnotices
When a page is moved, its editnotice is not moved with it. There is a post-move warning for it, but users would need to move it separately. That too can only be done by template editors, page movers and admins. I believe that there are plenty of editnotices that have become orphaned from their target page. I need a query to list such pages. If there is already a regularly updated database, that will work too. Thanks! —CX Zoom 07:53, 25 December 2024 (UTC)
- Here's mainspace only to get you started: quarry:query/89138. You or someone else can fork and improve this if you need additional namespaces. Making this a database report somewhere using {{Database report}} might be a good idea. Hope this helps. –Novem Linguae (talk) 08:42, 25 December 2024 (UTC)
- I suspect it's much worse than that. It's certainly more complex. There's plenty of mainspace titles with colons in them, and it's conceivable that some of those have orphaned editnotices; there's really no way around parsing for the namespace name, and that's going to be ugly and complex, and I haven't tried it yet. (It being Christmas morning and all. Maybe tomorrow.) But I wouldn't estimate that to result in more than a handful of other hits. Much more likely is the case that CX Zoom mentions directly: a page is moved but the editnotice isn't, leaving it attached to the remnant redirect. There's going to be false positives looking for those whether we do it the "correct" way and look in the move log (since there might be an editnotice intentionally attached to a page that had another page moved from it in the past), or whether we include editnotices attached to pages that are currently redirects. The latter's easier, and especially easier to combine with the query looking for pages that don't exist; I've done it at quarry:query/89148. That'll also miss editnotices that unintentionally weren't moved with their page, where the resulting redirect was turned into a different page, though. —Cryptic 15:30, 25 December 2024 (UTC)
- Thank you very much, both of you... —CX Zoom 16:45, 25 December 2024 (UTC)
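The shape of the query discussed here - strip the editnotice prefix, anti-join against page, and also flag notices whose target is now a redirect - can be sketched with a heavily simplified stand-in schema (the real page table keys on page_id/page_namespace, and the real query must parse namespaces; all titles below are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE page(page_title TEXT, page_is_redirect INTEGER);
INSERT INTO page VALUES ('Existing_article', 0), ('Old_title', 1);
CREATE TABLE editnotice(title TEXT);
INSERT INTO editnotice VALUES
  ('Template:Editnotices/Page/Existing_article'),
  ('Template:Editnotices/Page/Old_title'),
  ('Template:Editnotices/Page/Deleted_page');
""")

# Orphaned = target page missing entirely, or now a redirect (post-move case).
prefix = "Template:Editnotices/Page/"
rows = con.execute("""
    SELECT e.title
    FROM editnotice e
    LEFT JOIN page p ON p.page_title = substr(e.title, length(?) + 1)
    WHERE p.page_title IS NULL OR p.page_is_redirect = 1
    ORDER BY e.title
""", (prefix,)).fetchall()
assert rows == [('Template:Editnotices/Page/Deleted_page',),
                ('Template:Editnotices/Page/Old_title',)]
```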