Wikipedia:Featured article candidates/Algorithmic bias/archive1

The following is an archived discussion of a featured article nomination. Please do not modify it. Subsequent comments should be made on the article's talk page or in Wikipedia talk:Featured article candidates. No further edits should be made to this page.

The article was archived by Sarastro1 via FACBot (talk) 21:06, 5 September 2018 [1].


Nominator(s): Owlsmcgee (talk) 19:23, 1 August 2018 (UTC)

This article is about bias in computer systems, extremely relevant to topics such as artificial intelligence, machine learning, and big data. There has been an enormous amount of interest in this topic in the media and in academia, so having a good, reliable reference on Wikipedia seems valuable. The article has gone through two *very thorough* GA reviews (see here and here) so I wanted to try to take it all the way to FA status. Owlsmcgee (talk) 19:23, 1 August 2018 (UTC)

Image review

  • Suggest scaling up the lead image
  • File:02-Sandvig-Seeing-the-Sort-2014-WEB.png: do you have a link to support the CC0 designation?
  • File:A_computer_program_for_evaluating_forestry_opportunities_under_three_investment_criteria_(1969)_(20385500690).jpg: per the Flickr tag, is a more specific tag available? Nikkimaria (talk) 12:31, 4 August 2018 (UTC)

Comment: I just have a few observations following a read-through:

  • "The term algorithmic bias describes systematic and repeatable errors that create unfair outcomes"; what is the targeted meaning of "unfair" in this context? Is it legal, cultural, societal, economic, perceived, or all of these? There's a couple of examples given but it isn't specifically defined.
  • The History section jumps from "early example of algorithmic bias" (1986) to "cases still occur" (2018). What happened in between?
  • Other than a mention of machine learning in the Complexity section, I see almost no mention of AI, of which there have been some notable recent instances.
  • Shouldn't there be a section on testing and remediation of algorithmic bias, particularly in the context of AI?

Thanks. Praemonitus (talk) 19:23, 8 August 2018 (UTC)

@Owlsmcgee: Will you be able to answer our questions? Nikkimaria (talk) 16:37, 25 August 2018 (UTC)

Hi User:Nikkimaria, this is User:Owlsmcgee. Somehow I've only just seen this response, as I have been traveling for the past few weeks. I assure you I will give your questions my full attention as soon as I can return to editing in the next week. Sorry for the delay! --174.199.19.195 (talk) 17:50, 25 August 2018 (UTC)
I'm here with more time to take in your questions. Thank you for taking the time, @Nikkimaria: and @Praemonitus:! First, all of your image questions will be tackled soon; right now I want to focus on the text.
  • "Unfair outcomes" should be understood to be unfair outcomes in any domain, be it as you list, "legal, cultural, societal, economic..." etc. Would there be a better way for me to make this clearer in the text?
    • It looks like you've just defined unfair in terms of itself. The term itself often depends on the context. I think it needs tightening down. Praemonitus (talk)
  • The History section lists the earliest known example, but the history is not intended to be a complete history of examples of bias (which could be its own article). A sizeable number of examples exist in the article now. What if I added a transition sentence such as, "bias in algorithms has become a more prevalent area of research after increases in processing power allowed more complex algorithms to integrate into a wider range of uses." Would that explain the gap?
    • I'm not sure. I was just noting what appears to be an obvious gap. Praemonitus (talk)
  • AI makes use of algorithms; the algorithms that create AI biases are the same algorithms described in the article. The terms are essentially interchangeable. I chose "algorithms" as the language because it is more precise, with "Artificial Intelligence" being basically a more buzz-wordy version of "collections of computer algorithms."
    • What I'm remembering is the situation where an AI is trained via some means, and thereby acquires biases.[2] That's different from an algorithm with an encoded bias. The complexity of AI is reaching a point where a simple coding fix might not be possible because we don't fully understand how it works. Thanks. Praemonitus (talk)
  • As for Praemonitus' suggestion on testing and remediation, I agree this is a useful section to include, and I've looked for sources that might describe it but have come up empty-handed. There are certainly ways to test whether an algorithm is making mistakes, which doesn't quite belong in this article, as mistakes are not the same as bias. But there is no wide, scalable means of determining whether an algorithm is fair (a minimal sketch of what one such check might look like follows this list). Such techniques are devised on a case-by-case basis, in ways that depend heavily on the judgment of the people responsible for the algorithms. Right now the article does have a very small nod to the creation of the FAT-ML consortium to tackle these problems; would it be enough to find a few more examples?
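
For illustration only, here is a minimal sketch of the kind of fairness check discussed above, in Python. It compares selection rates across groups (a demographic-parity style comparison); the decisions, group labels, and choice of metric are all assumptions made up for this sketch, not a method taken from the article or from FAT-ML.

    # Sketch of one possible fairness check: compare the rate of positive
    # decisions across groups (a demographic-parity style comparison).
    # All decisions and group labels below are hypothetical.

    def selection_rates(decisions, groups):
        """Return the share of positive decisions (1s) for each group."""
        rates = {}
        for group in sorted(set(groups)):
            outcomes = [d for d, g in zip(decisions, groups) if g == group]
            rates[group] = sum(outcomes) / len(outcomes)
        return rates

    # Hypothetical decisions (1 = favourable outcome) and group membership.
    decisions = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    rates = selection_rates(decisions, groups)
    gap = max(rates.values()) - min(rates.values())
    print(rates)                              # {'A': 0.8, 'B': 0.4}
    print("demographic parity gap:", round(gap, 2))  # 0.4 on this toy data

Even this toy check depends on subjective choices (which groups to compare, which definition of fairness to apply), which is part of why no single scalable test exists.
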
Thanks for all the feedback, everyone. --Owlsmcgee (talk) 04:33, 29 August 2018 (UTC)

Closing comment: Given that this has been open for over a month, and collected minimal review and no support, I think the best course for the article is to archive this FAC now. It can be renominated after the usual two-week waiting period, but I would recommend trying to get some eyes on it beforehand, either by approaching a few reviewers or placing it at WP:PR. Sarastro (talk) 21:06, 5 September 2018 (UTC)

The above discussion is preserved as an archive. Please do not modify it. No further edits should be made to this page.