Wikipedia talk:Wikipedia Signpost/2015-08-19/Blog
Discuss this story
"users were placed into categories based on their rates of editing"
@JSutherland (WMF), interesting article. Can you clarify whether these categories were based on edits to WP as a whole each day or edits to the Shooting article specifically? The most curious finding to my eyes was the comparative lack of edits during the verdict's traffic spike, which is kind of amazing and worth communicating to those worried about editor dropoff. On one hand, it's logical that there's not much new info to add apart from the verdict itself, but one would think that the sheer number of eyeballs (traffic) would remind more editors to fix citations and clean the whole thing up, similar to how it first started. (Was the page protected during the traffic spike periods? Would be worth noting.) Empty pages are enticing to edit, especially to users who may know something about the topic, but once an article starts to build up, there's less of a reason (or "need") to bulk it up. I'd also posit that featured articles, with their length and "brilliant" verbosity, look so pristine that editors are naturally discouraged from touching them, feeling no need to disrupt the order. If you do continue with the qualitative analysis, I would be curious whether the WMF finds that people read more or less of an article when it reaches that saturation state, or whether it just makes them read the lede and skim one or two relevant parts instead of reading as much as they would of a smaller article. I've written lots of peer-reviewed content here, and I find that my own eyes glaze over at walls of perfectly cited text, so I've come to prefer concision over completeness. It's one thing to worry about editor participation (as tied to rate of edits) and another to question how content quality actually affects the end product: readers reading. – czar 18:47, 21 August 2015 (UTC)
- Since this is a study conducted in my volunteer time (indeed, as part of my studies!), I'll reply with this account. Thanks for your questions. I should start by saying that this study was addressed to journalism professors, not readers with much technical background, so a lot of the technical aspects are explained in the simplest terms I could manage (which means I purposely avoided talking about page protection and so forth, which are niche Wikipedia terms).
- The protection hypothesis is a good one, though. I imagine the edit rates remaining relatively low even in high-traffic periods might also have to do with the divide which currently exists between "readers" and "editors": an incredibly high number of readers still don't even realise they can edit in the first place, and this correlation (or lack thereof) may suggest the divide is more pronounced than we think. That's for another study though. ;) It's definitely worth doing a more technical, quantitative study in the future to look at the more Wikipedia-specific issues that arise from this work. — foxj 19:07, 21 August 2015 (UTC)
- Just realised I never actually responded to your question! The editors were placed into these categories based on their overall editing history. The metric is intentionally simple: it does not take into account long periods of inactivity, and it is likely to be skewed by the use of automated tools. It was, however, the easiest way of organising users that would be accessible to the professors the dissertation was aimed at. Hope that makes sense. (The editors whose edits were studied in the dissertation are listed at the end; there were more than 600, so it's a long list. ;) ) — foxj 19:10, 21 August 2015 (UTC)