Outstanding Questions
A big question for me is: what is the best way to measure the impact of this program on the community? We certainly don't want to put more work on the community to estimate this impact, but we also believe measuring the program's impact on ordinary editors is critical to having a full picture of what happened in the Pune pilot. Any ideas of how to overcome this challenge? -- LiAnna Davis (WMF) (talk) 22:14, 1 December 2011 (UTC)
- I think the best way to go about this would be to estimate some sort of "crap fraction" reflecting problems that need to be cleaned up in a similar manner to the PPP metric:
- crap fraction = (k1·(a + b) + k2·c + k3·d + k4·e) / m, with
- a = copyvio edits in any namespace
- b = copyvio uploads (local + commons)
- c = mainspace edits deleted or reverted for other reasons. Redirecting an article counts as reversion.
- d = mainspace edits with orange {{ambox}} problems (e.g. lack of RS, POV). If an edit has multiple problems, add the number of problems instead (e.g. unreferenced POV edit increases d by 2).
- e = mainspace edits with yellow {{ambox}} problems (e.g. wikify, copyedit). If an edit has multiple problems, add the number of problems instead.
- m = mainspace edits
- Edits should be assigned to the category with the highest weight (e.g. G12 deletion => 10 points). The weights are arbitrary but reflect the severity of the problem: k1 > k2 > k3 > k4. I suggest something like k1 = 10, k2 = 4, k3 = 2, k4 = 1. MER-C 03:36, 2 December 2011 (UTC)
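A minimal sketch of how this metric could be computed, assuming the weighted-sum form given above and per-edit problem labels supplied by reviewers; the category names, weights, and example counts below are illustrative placeholders, not part of the proposal:
<syntaxhighlight lang="python">
# Sketch of the proposed "crap fraction": (k1*(a + b) + k2*c + k3*d + k4*e) / m.
# Category labels and example data are hypothetical.

WEIGHTS = {
    "copyvio_edit": 10,         # a: copyvio edits in any namespace (k1)
    "copyvio_upload": 10,       # b: copyvio uploads, local + Commons (k1)
    "deleted_or_reverted": 4,   # c: other deletions/reversions, incl. redirecting (k2)
    "orange_ambox": 2,          # d: orange {{ambox}} problems, e.g. unreferenced, POV (k3)
    "yellow_ambox": 1,          # e: yellow {{ambox}} problems, e.g. wikify, copyedit (k4)
}

def crap_fraction(flagged_edits, mainspace_edits):
    """flagged_edits: one list of problem labels per problematic edit.
    An edit whose worst problem is deletion-level (copyvio or otherwise)
    counts once at its highest weight; ambox-style tag problems add their
    weight once per problem.  mainspace_edits: total mainspace edits (m)."""
    score = 0
    for problems in flagged_edits:
        weights = [WEIGHTS[p] for p in problems]
        if max(weights) >= WEIGHTS["deleted_or_reverted"]:
            score += max(weights)   # assign to the single worst category
        else:
            score += sum(weights)   # e.g. an unreferenced POV edit adds 2 * k3
    return score / mainspace_edits

# Hypothetical example: three flagged edits out of 50 mainspace edits.
flagged = [
    ["copyvio_edit"],                   # G12-style deletion -> 10
    ["orange_ambox", "orange_ambox"],   # unreferenced + POV -> 4
    ["yellow_ambox"],                   # copyedit tag -> 1
]
print(crap_fraction(flagged, 50))  # 0.3
</syntaxhighlight>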
CCI
Attempting to assess the impact on the community of the CCI is virtually impossible at this stage because very little of the work has been done. The investigation is less than a week old and isn't going to be finished for a long time yet (we still have open two-year-old CCIs). I suppose you could investigate what percentage of contributions have been removed as copyright violations, but that's going to produce a severe underestimate because most of the edits haven't been systematically surveyed for copyright problems. Hut 8.5 23:56, 1 December 2011 (UTC)
- Would there be data points like the backlog increase or anything like that we could use? We do have a couple of months to do this analysis, so we want to make it as thorough as possible, and if that means waiting until the CCI process has had a chance to start so that a time estimate can be extrapolated, that's fine. -- LiAnna Davis (WMF) (talk) 00:03, 2 December 2011 (UTC)
- You could count the number of edits, users or pages which have been reviewed, I suppose, but such a statistic might not be very meaningful because some users are much easier to check than others (some project participants have no edits apart from adding themselves to the project, for instance). It's possible that in a few months the first page of the CCI investigation might be complete, in which case you could get an idea of the number of copyright violations produced by a significant sample of users (196). Hut 8.5 00:17, 2 December 2011 (UTC)
- Even that would be an underestimate due to copyright violations that were deleted before the CCI was run. To get a complete picture, you must consider both live and deleted edits. The CCI is also not comprehensive: I noticed one user who posted a copyvio in the user talk namespace. The CCI backlog is so ridiculously large that the proportional impact of the IEP is rather small. The best way to assess impact on the community would be a ratio of crap edits (all namespaces + Commons) to potentially useful edits, weighted heavily towards copyvios. MER-C 02:13, 2 December 2011 (UTC)
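If the first CCI page (the 196-user sample mentioned above) does get finished, a rough extrapolation of the copyvio rate could look like the sketch below; the field names and counts are hypothetical, and as noted above the result would still undercount violations removed before the CCI was opened:
<syntaxhighlight lang="python">
import math

# Hypothetical per-user results from the first CCI page:
# (edits reviewed, copyvio edits found, counting live + deleted).
reviewed = [
    (120, 7),
    (45, 0),
    (300, 22),
    # ... one tuple per reviewed user in the sample
]

total_edits = sum(e for e, _ in reviewed)
total_copyvio = sum(c for _, c in reviewed)
rate = total_copyvio / total_edits

# Crude normal-approximation 95% interval; edits cluster by user,
# so treat this as a rough indication rather than a precise bound.
se = math.sqrt(rate * (1 - rate) / total_edits)
low, high = rate - 1.96 * se, rate + 1.96 * se

print(f"copyvio rate in sample: {rate:.1%} (approx. 95% CI {low:.1%}-{high:.1%})")
# Any figure derived this way is a lower bound: edits deleted before
# the CCI was opened are not counted here.
</syntaxhighlight>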
Is analysis needed?
"We want to do a thorough analysis of the Pune pilot program to derive learnings and trends"
Or you could spend a few minutes asking some of the people on the (virtual) ground. This doesn't need statistics - a qualitative analysis (which half-a-dozen names could deliver in less than half-an-hour) would tell WMF more than I still suspect they want to hear. A snazzy Powerpoint with figures on is no excuse for A Clue. Some of the basics of what went wrong are very obvious and they just need fixing. We don't care whether they went a lot wrong or a little, they went too wrong, and they need to be avoided completely in the future. Analysing trends will not explain any of this.
I'd note also that some of the really deep questions rely on the so-far invisible relationship between the WMF and the Indian colleges. Who expected to gain what from this entire process? Without knowing what the intention ever was, it's hard to know just where it went wrong. Andy Dingley (talk) 00:18, 2 December 2011 (UTC)
- Hi Andy, please note that both quantitative and qualitative analyses are happening! As mentioned, Tory Read (our outside evaluator) is interviewing a few Wikipedians for her report (I see from Hisham's talk page that you were one of the ones asked). We've also been taking note of all the information that's been previously mentioned on the talk pages. If you think we need more editor interviews, suggest that! We want to do such a thorough analysis because everyone we talk to has a different answer about what's wrong with our program, and I think that's because our program design was flawed on multiple points. Fixing one or two of the problems will just result in another frustrating pilot; we want to take the time to plan the next one right, and that involves a very thorough analysis to identify all the reasons why our first pilot failed, and what we can do to fix those problems.
- I'm glad you said you think the relationship between WMF and the colleges is invisible, as I wasn't aware that was an issue. The Wikipedia Education Program (this is true for our program in India as well as our current programs in the U.S. and Canada) is designed to get more people contributing to Wikipedia and to improve the quality of the content of Wikipedia (you'll recognize these as goals from the Strategic Plan). Obviously the quality part failed miserably in our Pune pilot, but data from our U.S. pilot showed that article quality improved 64%, which led us to expand the pilot to other countries, including India. The India program is designed to specifically address the Global South parts of the Strategic Plan. You can see the benefits for instructors on the Education Portal's Reasons to Use Wikipedia page. If you're looking for more details on the communication between WMF and various other parties in India, I encourage you to check out the Wikipedia:India_Education_Program/Documentation page. I'd be happy to clarify any other questions you have about this -- I'm really sorry it wasn't clearer before! -- LiAnna Davis (WMF) (talk) 00:55, 2 December 2011 (UTC)
- Extolling the virtues of some relatively minor successes serves only to cloud the real issues. The Pune experiment was a major success in that it has clearly demonstrated how not to plan, organise, and execute what are nevertheless extremely important initiatives in the effort to expand the encyclopedias and reach out to other cultures. The endemic gap between the salaried operatives and the volunteers needs to be closed, and there should be less unilateral action on the part of the organisers without listening to the community, amongst whom are also some experts (who are not paid). The organisers have been aware for a long time that the relationship between WMF and the colleges is invisible, as well as of the issues concerning the general planning and management of the projects from the top down and the absence of communication and cooperation with the online community.--Kudpung กุดผึ้ง (talk) 03:59, 2 December 2011 (UTC)
Programmatic results
To be certain we are measuring what needs to be measured, I'd like to see more detail regarding the parameters/dimensions of the program. I suggest starting with a list of the program goals and the planned program components, and adding a list of the unintended/unexpected outcomes. Then quantify those items and analyze the relationships among them. Jojalozzo 00:51, 2 December 2011 (UTC)
- Great idea. I'll start working on this list. -- LiAnna Davis (WMF) (talk) 01:11, 2 December 2011 (UTC)
Core issues
- The answers do not lie in added bureaucracy: proposed solutions and metrics are all available already in the many and various comments by community members on the increasing maze of talk pages connected with the IEP and USEP projects. The impact on the community is blatantly obvious - why keep creating yet more pages to add to the confusion, and carry out further costly analysis, when the answers are already staring us in the face, complete with the charts and graphs already provided, and have been discussed in depth on the mailing lists and other obscure lines of discussion? The main answer lies in rectifying the continuing lack of communication, transparency, and admission of errors (see 'introspective' in my post above). The main concern raised from the Pune pilot from the community angle is not being addressed, and LDavis and/or AnnieLin have made it clear elsewhere that they do not consider it part of their remit to take the community resources - the very impact that is being mentioned here - into consideration when planning their education projects; ignoring the known problems and trying to find solutions to new ones that apparently still need to be identified is a redundant exercise. Ultimately, this will simply foster more ire and drive yet more volunteers and OAs away from wanting to be helpful, rather than solicit their aid, which in any case can only be to repeat what they have already said time and time again. Solutions and suggestions have been tossed around by some extremely competent and knowledgeable members of the volunteer force, only to land repeatedly in some kind of no man's land between the WMF and the community. It is imperative to understand that all education programmes will generate more articles - which is of course the goal of the initiatives, and which is recognised and supported in principle by everyone - that will still need to be policed by experienced regular editors, and that these programmes cannot be implemented before the online volunteer community is forewarned, and forearmed with the required tools and personnel. Perhaps Tory's independent analysis will come up with some answers (and I'm confident it will), and it may be best to wait for her report. Kudpung กุดผึ้ง (talk) 03:33, 2 December 2011 (UTC)