Wikipedia:WikiProject United States Public Policy/Assessment/CasualObserver'48
CasualObserver'48's PPI Assessment Page
CasualObserver'48 is classified as a Wikipedia expert
Assessment 1, part 1
The purpose of this evaluation is not to gauge variability in article quality, but to examine the metric itself. How consistent is this assessment tool? And is there a difference in scores between subject-area expert assessment and Wikipedian article assessment?
- Casual's notes on Assessment 1.1
- I am not an expert in these article areas, but received #1, knew people seeking #2, knew of expensive toilet seats from #3, was unaware of #4, happily drove #5's Corvair, and long attended a #6. I strictly followed the rubric; i.e., I used only the defined scores in the ranges and did not vary between those specifically defined. For example, in comprehensiveness, only scores of 10, 7, 4, 3 and 1 could be used; the same for assessing sourcing, with 6, 4, 2, 1 and 0 possible. I am unaware whether this was what was intended in this exercise, but this was how I approached my first article assessment. This might be something to state/discuss globally within the project.
- Comprehensiveness =4/10 — I know nothing about subject
- Sourcing = 4/6 — One ref is a 'pro'
- Neutrality =2/3 — More than a 1
- Readability =3/3 — See no problems
- Illustrations =2/2 — Nothing really needed
- Formatting =2/2 — No real problems
- Total =17/26
- Comprehensiveness =7/10 — I am aware of several non-covered or contextual subjects, suffers from recentism
- Sourcing =4/6 — Many primary sources, very little coverage of some specifics for context
- Neutrality =2/3 — Reasonable
- Readability =1/3 — I became quite confused
- Illustrations =1/2 — Should have map of where from, country and period
- Formatting =1/2 — Inadequate lede, missing int links
- Total =16/26
- Comprehensiveness =3/10 — Seems very limited; it does all sorts of things not mentioned
- Sourcing = 2/6 — Limited number and type of refs
- Neutrality =2/3 — In the ball park
- Readability =2/3 — Reasonable, save comprehensiveness
- Illustrations =2/2 — Covers most bases
- Formatting =2/2 — Covers most bases
- Total =13/26
- Comprehensiveness =1/10 — Bare bones stub
- Sourcing =0/6 — But 2 ext links
- Neutrality =2/3 — What it says passes
- Readability =2/3 — Readable, but not informative
- Illustrations =2/2 — Nothing much appropriate
- Formatting =2/2 — Mostly conforms to MOS
- Total =9/26
- Comprehensiveness =7/10 — Seems to cover much of what I know/have heard
- Sourcing =1/6 — Poorly ref'd, all from the one source
- Neutrality =1/3 — Other viewpoints not included/missing
- Readability =2/3 — Reasonable
- Illustrations =1/2 — Could use more
- Formatting =1/2 — Middle compliance
- Total =13/26
- Comprehensiveness =7/10 — A tough choice, based on what is there. A score of 4, based on what isn't
- Sourcing =1/6 — Very few refs used, although bibliography fills some holes
- Neutrality =3/3 — I see no problems; it says what it is
- Readability = 2/3 — Generally, but some sections are a 1
- Illustrations = 1/2 — Could do better
- Formatting = 1/2 — Generally consistent
- Total = 15/26
Assessment 1, request 2
Instructions: All articles to assess date from 1 October 2010 or a previous edit. There are a couple of re-reviews; hopefully those will be fast.
- Casual's notes relative to first assessment request:
- No particular knowledge, except occasionally traveled under Fly America Act terms, and outside US before and after 9/11.
- More cognizant of Stub, Start and C-class parameters, and used full range of points.
- Possibly more critical than previous; generally read rubric from bottom up, rather than top down as before.
- Interesting - bottom up application of metric, did it work better? did you feel more confident about it? ARoth (Public Policy Initiative) (talk) 20:12, 13 October 2010 (UTC)
- Did it work better? Well, I am hoping it does, from a same-ballpark-team point of view, as well as for the benefit of the project's number-crunching. I do follow the discussion closely, if not visibly. For the specifically discussed COBRA article and some other instances, I assessed consistently high compared to those more experienced in such an exercise, who know the top-down rubric-reading very well for the better-quality articles. I really don't; that seems too much excruciating lawyerly detail for me, and I tend more toward the spirit than the letter. Then again, we are not yet working within that higher-quality realm, and this has been recognized. After I saw and appreciated the more global Stub, Start and C-class parameter point-realm, a bottom-up approach to assessing seemed a simple, logical and justifiable method for attaining results more consistent with the others'. The discussion also seemed to modify their thinking, with some upward adjustment in their approaches. All these adjustments indicate a more realist approach (this is what we have) rather than a more ideological approach (this is what should be) for the articles we are assigned. Any other interpretations of those words are more than adequately covered by the six assessed parameters. We seem to have smoothed out some systematic differences toward a more homogeneous consensus within which to work. How it all developed is also somewhat reassuring for all, I think.
- Did [I] feel more confident about it? The quick personal answer is yes; it worked fine for me, and I think the results will work better for you and the others. I always was confident that a consensual system would develop, could work, and we will see how it does. From an uncommon professional view, I characterize the task at hand as one of QA, with some associated QC input, over what the wiki manufactures. I feel more comfortable about it too; it remains more like a job than a personal choice, but I volunteered to be part of a solution trial. As noted before, the problem is only adjusting how the hat is worn, once the employer's headgear is properly fitted and posed; another assessor mentioned a different adjustment, and I see the discussion as positive and helpful. CasualObserver'48 (talk) 09:08, 14 October 2010 (UTC)
- Comprehensiveness 3/10: beyond preliminary intro, but far from comprehensive; more than a stub
- Sourcing 3/6: about half, some unsourced
- Neutrality 1/3: textbook only noted, AEI prose weaseled, no 'neo' before conservative
- Readability 2/3: middle
- Formatting 2/2: except poor lede on short article
- Illustrations 1/2: a picture or book graphic possible
- Total = 12/26
- Comprehensiveness 2/10: seems like much is missing compared to many PATRIOT Act pages; stubby
- Sourcing 1/6: for the primary source pdf
- Neutrality 1/3: too much unsaid; 'in the aftermath', 'among other things' and 'similar to'...a redlink
- Readability 2/3: more than 1
- Formatting 1/2: missing similar to redlink
- Illustrations 1/2: but no idea what might be appropriate; maybe a picture showing volumes of text required
- Total = 8/26
- Comprehensiveness 2/10: bare bones stub, lacks context
- Sourcing 2/6: both sources from EPA
- Neutrality 2/3: factual tone, none noted
- Readability 2/3: not a 3
- Formatting 2/2: no errors noted
- Illustrations 1/2: should have ERA logo or something; it's their birth certificate
- Total = 11/26
- Comprehensiveness 4/10: some areas, but not others
- Sourcing 0/6: none included
- Neutrality 1/3: appears to favor one side
- Readability 1/3: it could be improved
- Formatting 1/2: mid-compliance
- Illustrations 1/2: no idea what might be appropriate
- Total = 8/26
- Comprehensiveness 4/10: SWAG, but lacking knowledge-base
- Sourcing 3/6: seems about half
- Neutrality 1/3: does not cover the down-side noted in most recent recession ref, needs updating
- Readability 2/3: generally OK
- Formatting 1/2: could use better org format
- Illustrations 2/2: seems to cover basics, but needs present down-turn up-date
- Total = 13/26
Diff with previously assessed version = [13]; some format and link improvements, but no change in assessment rating. See previous.
- Comprehensiveness 3/10:
- Sourcing 2/6:
- Neutrality 2/3:
- Readability 2/3:
- Formatting 2/2:
- Illustrations 2/2:
- Total = 13/26
Diff with previously assessed version = [15]; some format and link improvements, but no change in assessment rating. See previous.
- Comprehensiveness 7/10:
- Sourcing 1/6:
- Neutrality 1/3:
- Readability 2/3:
- Formatting 1/2:
- Illustrations 1/2:
- Total = 13/26