Wikipedia:Content review/workshop
This page is currently inactive and is retained for historical reference. Either the page is no longer relevant or consensus on its purpose has become unclear. To revive discussion, seek broader input via a forum such as the village pump.
The Content review workshop is intended to evaluate the various Wikipedia processes that assess and improve content. It seeks to clarify means and methods, reduce procedural overhead, and better align process activity outside the mainspace with actual improvements in the mainspace. For now, editors are free to add general comments and ideas to the brainstorming page, while the scope of the project is more precisely defined here.
The Current Topic for discussion is: What can be done to ensure that content review processes evaluate articles for verifiability and check that the sources used are reliable?
Goals
Goals of the content review processes
At one level, the goal of all the content review processes is to improve the quality of mainspace articles. That definition is too broad, though, since many other activities share this goal. This section lists the goals pursued by one or more of the specific processes listed under Scope below.
- Produce well-written articles with accurate, broad, and neutral content
- Improve the quality of the writing
- Train editors in how to write better articles
- Encourage article editors to participate
- Encourage reviewers to participate
- Connect experts to articles needing content review
- Identify articles that can be improved
- Encourage the use of a house style
- Establish communities of editors who are encouraged to work on content improvement
- Improve and document article standards
Goals of this workshop
This workshop has the following goals:
- Identify what is done well by which groups, and what is done badly
- Clarify the scalability of the various processes and suggest improvements
- Assess the degree to which processes are focused on content improvement relative to non-mainspace overhead
- Identify redundancy between processes and suggest streamlining
- Come up with ideas to help the content review processes meet the goals listed above
Scope
The following is a list of some of the content review processes that exist on Wikipedia. Please add others if they are absent.
- Featured articles, featured lists, featured pictures and featured topics.
- Good articles.
- Peer review.
- Wikipedia v. 1.0. This is the origin of the Stub, Start, B and A ratings.
- WikiProjects with internal assessment procedures, such as WP:MILHIST. These typically use a Stub, Start, B, GA, A, FA system.
- Wikipedia:Requests for feedback
- The League of Copy Editors. Not a review process per se, but often heavily involved in the FA and other review processes.
List of some existing problems and topics
This is a list of problems and topics that are agreed to be of interest by the participants in this workshop, and which are candidates to be the Current Topic worked on by this workshop. These are in no particular order.
Please feel free to edit this list to improve it. Please make the problem statements positive, if possible, rather than negative: that is, please state the improvement you'd like to see, rather than making a criticism of an existing process. Where possible, please try to state the problem without reference to a specific existing process.
- How can reviewers be encouraged at all forums? Peer review in particular is perennially understaffed.
- Is there a way to reduce redundancy between the FA and GA processes?
- How do we recognize the value and quality of good, short articles that are unlikely to grow?
- How can we encourage the connection of subject-matter experts with articles that need their expertise?
- How can all the processes scale? Each has its own barriers to becoming ten times more productive.
- Acrimonious arguments at individual processes, and in discussions between regulars on the talk pages of the different processes, drain time and energy; is there a way to eliminate or reduce these sources of friction?
- Can we avoid or mitigate the weaknesses of single-reviewer systems in which one user may mistakenly pass an article that does not meet the relevant standards?
- Can we avoid or mitigate the weaknesses of multiple-reviewer systems in which some undocumented and unrecorded form of consensus has to be decided on by a single individual?
- What can be done in all processes to reduce bureaucracy and instruction creep and make things easy for the content-writers wishing to use the processes?
- Can statistics be improved for any of the processes, in order to help measure their success? No process keeps complete statistics on both articles reviewed and reviewer participation, and the statistics that are kept vary greatly among the existing processes.
- Are there opportunities for collaboration between the various review processes?
- Should there be a unified review hierarchy?
- What can be done to make it easier for editors to make articles comply with the somewhat technical Manual of Style?
- Is the current set of labels (Stub, Start, B, and A; plus GA and FA) the right set for assessing article quality or state?
- Can we improve the mechanism for nominating articles for review, either for each process individually or across the project as a whole?
- What can be done to encourage in-depth review of reliability of sources in the various review processes?
Process
How this page is intended to work
The intention is for talk page discussions in this workshop to lead to ideas that can be taken to the talk pages of the relevant content review processes for further discussion and validation. On this talk page, a "Current Topic" will be decided from time to time, in order to remain focused on particular process issues. The workshop aims to be both methodical and organic; that is, it intends to focus on processes one by one but also to let discussion emerge naturally.
After a topic is decided on, the workshop talk page will attempt to reach consensus on a proposal to address the topic. When sufficient consensus has been reached, a summary of the proposal will be placed on this page, under "Proposals". Further discussion on the talk page may refine it, and implementation details and outstanding questions will be listed there.
Once the workshop feels all significant outstanding questions for the proposal have been resolved, the facilitator will ask the workshop to agree on whether it is time to take the proposal to the wider community via an appropriate announcement location, such as the village pump, the Centralized discussion template, or the individual content review process pages. The workshop will discuss whether further proposals or new topics might change the current proposal, making it better to wait, or whether there is instead a reason to propose it immediately, such as a belief that it is self-contained or a desire to avoid recommending too great a change in a single proposal.
Participation
Anyone is free to participate in this workshop. However, because of the complex nature of the issues, and because of some acrimony in prior discussions elsewhere on these topics, it is suggested that editors who wish to comment familiarize themselves with some of the discussion history, and consider committing to an extended period of involvement.
If further recruitment of editors with knowledge of specific processes is thought to be valuable, consider leaving a note on the talk pages of individual editors who you feel would be good contributors to this process.
Facilitator
This workshop has a facilitator, currently Mike Christie. The role of the facilitator is to:
- help refine and improve the process
- help identify and articulate consensus
- keep the discussion moving through the process at an appropriate speed
The facilitator has no special authority beyond the agreement of the editors in this workshop to allow the facilitator to help guide the workshop. The role can be eliminated or the person performing the role changed at any time by consensus on the talk page.
Current topic proposals
The Current Topic for discussion is: What can be done to ensure that content review processes evaluate articles for verifiability and check that the sources used are reliable?
A prior topic on peer review is on hold until a bot can be written to produce example output for further discussion.
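To give that discussion something concrete, the following is a minimal, purely illustrative sketch of what the first step of such a bot might look like, assuming it were built on Pywikibot; the category name and the plain-text output are assumptions made for discussion only, not a description of the planned bot.

    # Hypothetical sketch only: list the pages currently at peer review, so the
    # workshop can discuss what a bot's example output might need to contain.
    import pywikibot

    site = pywikibot.Site('en', 'wikipedia')

    # The category name is an assumption; a real bot would read whatever
    # listing the peer review process actually maintains.
    category = pywikibot.Category(site, 'Category:Requests for peer review')

    for page in category.articles():
        # A real report would likely be formatted as wikitext and posted to a
        # project subpage rather than printed to the console.
        print(page.title())

Whatever form the real bot takes, its output format would be decided by the workshop once the prior topic is revived.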