Wikipedia talk:Research help/Pilot report
Opinion
In regard to the "Conclusion" section: "The sample size of survey respondents may be deemed too small to draw strong conclusions, and we actually support that view. We need more data." is something I would agree with. --Ozzie10aaaa (talk) 21:38, 14 September 2016 (UTC)
- That was one of the greatest challenges here. We originally proposed a larger sample, but ended up having to go with a smaller one. And unfortunately, this is one of those design tweaks that is best tested in the wild, so to speak. 22:04, 14 September 2016 (UTC)
- @Ozzie10aaaa:, Astinson (WMF) (talk) 22:05, 14 September 2016 (UTC)
- Yes; in terms of the "Research help" pilot study, a suggestion would be to have had it (or, in the future, have it) run for a longer period of time, thereby increasing the possibility of a greater sample size... IMO --Ozzie10aaaa (talk) 22:47, 14 September 2016 (UTC)
- Yes, so few people answered the survey that there is little point in analyzing the results much. In terms of data, it would have been useful to see what percentage of people who read an article clicked through to the research page. Just having the bare number of click-throughs is not very helpful. Jytdog (talk) 00:17, 15 September 2016 (UTC)
Second round pilot? What do you think about the strategy?
One of the main points of feedback we would really appreciate is on the structure and strategy for the Stage 2 proposal. Does it make sense? How should we go about the process? Astinson (WMF) (talk) 22:06, 14 September 2016 (UTC)
General response
I would agree with the analysis, by and large:
- There was opposition to the placement of the link.
- There was a mixed, but generally positive reaction to the content of the page.
- The sample size for the survey was very small, and subject to unknown confounds.
I would like to see a more dynamic approach to a second pilot: with a left-bar link (where it doesn't conflict with the existing tools, which themselves could be collapsed where needed). It could also be useful to add similar links to other sections, such as External links.
All the best: Rich Farmbrough, 11:59, 16 September 2016 (UTC).
- Thanks for the feedback @Rich Farmbrough:. Though I appreciate the instinct to try to use the left-hand column, as I have mentioned in a few other places, best practice in current web development does not use the left-hand columns. I am checking with some of our researchers at WMF to see if this assumption is true in our context. I will have an update, and if there isn't any research, I will follow up with you and see if we can do a pilot as a way of testing. Astinson (WMF) (talk) 18:57, 21 September 2016 (UTC)
- I talked with our readership research expert, and he said he is not familiar with any studies in the Wikimedia context. We are going to mock up a demo image for the next survey which includes a link like you describe, and see if that gets better pickup than the existing survey (bigger sample size, not just people who bring themselves to the page). We are seeing if we can identify a set of users like that soon. Astinson (WMF) (talk) 16:59, 23 September 2016 (UTC)
Plagiarism and attribution?
This page uses a number of quotes from public on-wiki discussion but does not attribute the sources; attribution is required by the copyright license. Copyright issues aside, research ethics dictate that you must cite your sources and quotations; failure to do so is plagiarism. Considering these are published, publicly available statements, they should be cited: they are someone else's ideas, and those people deserve credit for them. Wugapodes [thɔk] [ˈkan.ˌʧɻɪbz] 13:58, 16 September 2016 (UTC)
Feedback on methodology
To give actual feedback, I think Phase 2 should be designed to better answer your research questions. For example, your third research question--"Did [readers] think that a link on articles to the page would be helpful to Wikipedia readers?"--went unanswered. While those who clicked on the link and answered your survey seemed to think so, the sample size was far too small, and there is a clear selection bias in this case because people who took the time to click the link would obviously be more inclined to say such a thing would be useful than those who didn't click the link. A randomized survey of readers that does not require clicking the link needs to be done to answer this question. Indeed, I think this is the linchpin of the entire project because if readers don't find this useful, it doesn't matter how it looks or where it goes or how many people click on the link.
Another question that went unanswered: how many people saw the link and did not click on it? You give the statistic that the page was viewed ~25,000 times, but that is largely decontextualized. How are we to know whether that is a big number or a small number? What is the average number of page views for the 10,000 source pages the link was placed on? If those 10,000 pages had 26,000 views over the research period, then the pilot was incredibly successful; but if they had, say, 25,000,000 views collectively (~23 page views per page per day, so not unreasonable), then that is much less impressive, as <1% of page views resulted in a click on the link.
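To make that arithmetic concrete, here is a minimal sketch of the click-through calculation; the ~25,000 figure is the reported page-view count from the report, while both source-page totals are hypothetical scenarios, not measured data:
<syntaxhighlight lang="python">
# Back-of-the-envelope click-through rates for the two scenarios above.
research_page_views = 25_000  # reported views of the Research help page

scenarios = {
    "optimistic (26,000 source-page views)": 26_000,
    "pessimistic (25,000,000 source-page views)": 25_000_000,  # ~23 views/page/day
}
for label, source_views in scenarios.items():
    ctr = research_page_views / source_views
    print(f"{label}: {ctr:.2%} of source-page views led to a click")
</syntaxhighlight>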
For your two other research questions--"Did readers of the Research help page find its content useful?" and "Where did readers of the page think such a link should be located?"--I think you did an adequate job of probing them, but you just need a larger sample size. I will say, though, that there is probably a mild selection bias in the "Where did readers of the page think such a link should be located?" question. If respondents clicked on a link in the references section to get to the survey, they probably thought that was a reasonable location for it (or were at least primed for such a response). A good way to deal with this bias would be to include it as a question in the randomized survey of readers I suggested above.
In general, I think your research questions are best served not by an expanded pilot where the link is placed on more pages or in different places. I think you need to show that this is a feature that readers actually would find useful through a survey, and use your existing findings to corroborate that the previous pilot shows enough effectiveness to merit a second, expanded pilot. Wugapodes [thɔk] [ˈkan.ˌʧɻɪbz] 14:29, 16 September 2016 (UTC)
- @Wugapodes: Thanks for the feedback on the process here. I think your critique is all very fair, and spot on.
- One thing I want to note: we know, especially on mobile, that click-through on wikilinks greatly diminishes the further down the article we go (see meta:Research:Which_parts_of_an_article_do_readers_read), so we expect only a small fraction of readers to be in the right place for clicking through (that is a caveat of this method). However, for our ideal audience--people doing some type of systematic research with Wikipedia (librarians, educators, students)--this is where we expect them to show up; so, in part, we made this location choice knowing that it is only going to educate a fraction of the readers engaged with the articles. Our hope is that the page is sufficiently useful and easily findable that, like many of the survey respondents, readers feel they can share the page and the information on it with others in educational/learning settings. Astinson (WMF) (talk) 18:44, 21 September 2016 (UTC)
- @Astinson (WMF): Well, that was an interesting article on reader behavior! Thanks for linking to it. I think, though, that an analysis of how effective it is is still possible, and contextualizing the click data is necessary. Looking at the data from the link you gave, you could probably come up with a cumulative probability function for the likelihood of a reader making it to the top of the reference section. For example, the image at right is basically that already, and so we would expect about 75% of readers to make it past the top 1/8 of a page (eyeballing it here) and 25% of readers to make it to the bottom 1/4. This can then be used to come up with a scaled estimate of how many people likely saw the link; subtracting the average then gives the deviation. Wugapodes [thɔk] [ˈkan.ˌʧɻɪbz] 19:35, 21 September 2016 (UTC)
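A minimal sketch of that scaled estimate, assuming survival probabilities linearly interpolated between the eyeballed anchor points above; the link depth and the total-view figure are hypothetical placeholders, not measured values:
<syntaxhighlight lang="python">
# Scaled estimate of link impressions from an eyeballed reading-depth curve.
def p_reaches(depth):
    """Probability a reader scrolls at least `depth` (0-1) down the page,
    linearly interpolated between eyeballed anchor points."""
    anchors = [(0.0, 1.0), (0.125, 0.75), (1.0, 0.25)]  # (depth, P(reach))
    for (d0, p0), (d1, p1) in zip(anchors, anchors[1:]):
        if depth <= d1:
            return p0 + (depth - d0) / (d1 - d0) * (p1 - p0)
    return anchors[-1][1]

total_source_views = 25_000_000  # hypothetical total views of source pages
link_depth = 0.8                 # assume references sit ~80% down a page

impressions = total_source_views * p_reaches(link_depth)
clicks = 25_000                  # reported Research help page views
print(f"Estimated link impressions: {impressions:,.0f}")
print(f"CTR among readers who likely saw the link: {clicks / impressions:.2%}")
</syntaxhighlight>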
- @Wugapodes: Unfortunately, that is only mobile reader data; as far as I can tell, we don't have much else that is consistent or at scale enough for us to be 100% sure (we could do some rough comparisons to see what it looks like). I think the better comparison is expected average daily visibility, as compared to other pages (such as WP:About, with high impact, or other informational tools that we have on Wikipedia). Our hope is to create something that educates a subsection of our readers meaningfully, so that they become better advocates. Comparing to other pages with similar intentions might be a good strategy. The other side of it: even if it only reaches a small group, with some fall-off as it's no longer new to folks, educating any subsection of users should have knock-on effects (for instance, if one out of every 20 people who see the page decided to edit in the next few months because they are better prepared, that would completely skew our active editors in a month). Though I get why that would make the data more persuasive, I am not entirely persuaded that optimizing for visibility should be the only criterion. Astinson (WMF) (talk) 16:56, 23 September 2016 (UTC)
Some comments
I just learned of this project, which is why I'm providing input so late in the exercise.
First, speaking as an experienced & long-term Wikipedia contributor, I believe something like this is long overdue. Having watched articles on Wikipedia evolve over the last 13-14 years, I have come to the conclusion that no school system anywhere in the world does a satisfactory job of teaching its students how to perform research. A generation ago, when I was in school, we were pointed to the library's card catalog & the Readers' Guide to Periodical Literature & told to have at it -- which works if one has picked out a subject that is explicitly mentioned in either. Now kids are pointed to Google, where they pull information from the first few pages of hits on the subject of their query. Can there be any question why kids for generations have repeatedly turned to the encyclopedia for their research?
Which is why I would change the caption of the top photo from "Wikipedia makes a great starting point for research, but it needn't be your end point" to "Wikipedia makes a great starting point for research, but it should not be your end point". Unless one merely needs confirmation of a simple fact (e.g. when a famous person was born or died, the population of a city, specific values for setting up a netmask under IPv6), all encyclopedias are merely the starting point for research, a place to get an overview of the subject & map out a strategy for the serious exploration.
But as for further testing, I had an idea. Since most of the people who need advice about how to use Wikipedia will be newbies, we could conclude most of them would not have an account; they would not be reading Wikipedia while logged in. Why not run a test -- say a week, at most -- where a banner with a summary of this page & a link to it appears on every page where the user is not logged in? I know the website engineers have the ability to identify browsers that are not using an account, so it is doable. And IIRC, that part of the webpage has been jealously guarded by the WMF as their property. This might be a way to reach most of its target audience. -- llywrch (talk) 16:53, 20 September 2016 (UTC)
- @Llywrch: Thanks for the thoughtful response: we actually want to catch new editors too, but your proposal of a test through some type of controlled temporary placement makes a lot of sense. Part of the problem is that it's a change to the content in the mainspace, which historically has not been a space that the WMF can engage directly (hence why we are trying to work through community processes). Running something like a fundraising banner might be an interesting strategy for exposure; I am not sure it would be an effective way to test the impact of the placement, though (which is where most of the concern from community members comes from). Astinson (WMF) (talk) 15:07, 21 September 2016 (UTC)
- I think I have enunciated this elsewhere: it very much depends on what one means by "research". Very often Wikipedia provides all the information one requires (Zipf's law probably applies). Of course, as Wikipedians researching for Wikipedia, this is less often the case. The meaning of the word encyclopaedic does, after all, imply "completeness" rather than "superficiality".
- Certainly if one wishes to become a sufficiently specialised specialist other resources will be needed. This though may be a moving target. There are subjects in which I considered myself reasonably well versed (perhaps enough to write an introductory volume) where Wikipedia eclipses my personal knowledge.
- Thus I am somewhat chary of the value-laden "shouldn't" prevailing over "needn't".
- All the best: Rich Farmbrough, 19:47, 22 September 2016 (UTC).