Algorithmic accountability

Algorithmic accountability refers to the allocation of responsibility for the consequences of real-world actions influenced by algorithms used in decision-making processes.[1]

Ideally, algorithms should be designed to eliminate bias from their decision-making outcomes. This means they ought to evaluate only relevant characteristics of the input data, avoiding distinctions based on attributes that are generally inappropriate in social contexts, such as an individual's ethnicity in legal judgments. However, adherence to this principle is not always guaranteed, and there are instances where individuals may be adversely affected by algorithmic decisions. Responsibility for any harm resulting from a machine's decision may lie with the algorithm itself or with the individuals who designed it, particularly if the decision resulted from bias or flawed data analysis inherent in the algorithm's design.[2]
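
As a concrete illustration of the design principle above, the following minimal Python sketch restricts a hypothetical loan scorer to an explicit allow-list of relevant features, so that a protected attribute such as ethnicity never reaches the scoring rule. All field names and weights are invented for illustration; note that excluding an attribute in this way does not by itself guarantee unbiased outcomes, since other features can act as proxies for it.

```python
# Hypothetical "relevant features only" scorer: protected attributes in
# the raw record are filtered out before any computation happens.

RELEVANT_FEATURES = {"income", "debt", "payment_history"}

def score_applicant(applicant: dict) -> float:
    """Score an applicant using only allow-listed features."""
    filtered = {k: v for k, v in applicant.items() if k in RELEVANT_FEATURES}
    # Invented linear scoring rule, for illustration only.
    return (0.5 * filtered["income"] / 1000
            - 0.3 * filtered["debt"] / 1000
            + 20.0 * filtered["payment_history"])

applicant = {
    "income": 52_000,
    "debt": 9_000,
    "payment_history": 0.9,
    "ethnicity": "recorded in raw data, never used by the scorer",
}
print(score_applicant(applicant))  # 26.0 - 2.7 + 18.0 = 41.3
```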

Algorithm usage

Algorithms are widely used across sectors of society that incorporate computational techniques into their control systems. These applications span numerous industries, including medical, transportation, and payment services.[3] In these contexts, algorithms perform functions such as:[4]

  • Approving or denying credit card applications;
  • Counting votes in elections;
  • Approving or denying immigrant visas;
  • Determining which taxpayers will be audited on their income taxes;
  • Managing systems that control self-driving cars on a highway;
  • Scoring individuals as potential criminals for use in legal proceedings.

However, the implementation of these algorithms can be complex and opaque. Generally, algorithms function as "black boxes," meaning that the specific processes an input undergoes during execution are often not transparent, with users typically only seeing the resulting output.[5] This lack of transparency raises concerns about potential biases within the algorithms, as the parameters influencing decision-making may not be well understood. The outputs generated can lead to perceptions of bias, especially if individuals in similar circumstances receive different results. According to Nicholas Diakopoulos:

But these algorithms can make mistakes. They have biases. Yet they sit in opaque black boxes, their inner workings, their inner “thoughts” hidden behind layers of complexity. We need to get inside that black box, to understand how they may be exerting power on us, and to understand where they might be making unjust mistakes.
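
The "black box" behavior Diakopoulos describes can be made concrete with a small sketch. In the hypothetical Python class below, a caller submits a record and receives only a final label; the weights that actually drive the decision are private and undocumented, so an affected person cannot tell which input was decisive. The model, its parameters, and the threshold are all assumptions made up for illustration.

```python
# Toy opaque scorer: only the output label is visible to the user.

class OpaqueRiskScorer:
    def __init__(self):
        # Internal parameters are neither exposed nor documented.
        self._weights = {"age": -0.02, "prior_incidents": 0.6, "zip_code_factor": 0.3}

    def score(self, record: dict) -> str:
        raw = sum(w * record.get(k, 0) for k, w in self._weights.items())
        return "HIGH" if raw > 1.0 else "LOW"  # all the caller ever sees

scorer = OpaqueRiskScorer()
print(scorer.score({"age": 30, "prior_incidents": 3, "zip_code_factor": 1}))
# Prints "HIGH" (-0.6 + 1.8 + 0.3 = 1.5), but nothing reveals that the
# neighborhood-derived factor contributed to the result.
```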

Wisconsin Supreme Court case

Algorithms are prevalent across many fields and significantly influence decisions that affect the population at large, yet their underlying structures and parameters often remain unknown to those impacted by their outcomes. A notable case illustrating this issue is the Wisconsin Supreme Court's 2016 ruling concerning "risk assessment" algorithms used in criminal sentencing.[3] The court determined that scores generated by such algorithms, which analyze multiple parameters about an individual, may not serve as the determinative factor in sentencing a defendant. Furthermore, the court mandated that reports submitted to judges be accompanied by information regarding the accuracy of the algorithm used to compute these scores.

This ruling is regarded as a noteworthy development in how society manages software that makes consequential decisions, underscoring the importance of reliability, particularly in high-stakes settings such as the legal system. Using algorithms in these contexts requires a high degree of impartiality in processing input data. However, experts note that considerable work remains to ensure the accuracy of algorithmic results, and persistent questions about the transparency of data processing raise further concerns about the appropriateness of the algorithms and the intentions of their designers.[citation needed]

Controversies

A notable instance of potential algorithmic bias is highlighted in an article by The Washington Post[6] regarding the ride-hailing service Uber. An analysis of collected data revealed that estimated waiting times for users varied based on the neighborhoods in which they resided. Key factors influencing these discrepancies included the predominant ethnicity and average income of the area.

Specifically, neighborhoods with a majority white population and higher average incomes tended to have shorter waiting times, while those with more diverse ethnic compositions and lower average incomes experienced longer waits. This observation reflects a correlation identified in the data rather than a demonstrated cause-and-effect relationship, and the analysis makes no value judgment about the behavior of the Uber app in these cases.
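
The distinction between correlation and causation drawn above can be illustrated with a short sketch of the kind of analysis the article describes. The per-neighborhood figures below are fabricated for illustration and are not the Post's data; the example only shows how an association between income and estimated wait time would be measured (Python 3.10+ is assumed for statistics.correlation).

```python
# Measure how estimated wait time co-varies with neighborhood income.
from statistics import correlation  # Pearson's r; Python 3.10+

median_income = [32_000, 45_000, 58_000, 71_000, 90_000]  # fabricated
wait_minutes = [11.2, 9.8, 7.5, 6.1, 4.9]                 # fabricated

r = correlation(median_income, wait_minutes)
print(f"Pearson r = {r:.2f}")  # strongly negative association
# A strong negative r shows only that richer areas in this sample had
# shorter waits; it does not establish what causes the difference.
```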

In a separate analysis published in the "Direito Digit@l" column on the Migalhas website, authors Coriolano Almeida Camargo and Marcelo Crespo examine the use of algorithms in decision-making contexts traditionally handled by humans. They discuss the challenges in assessing whether machine-generated decisions are fair and the potential flaws that can arise in this validation process.

The issue goes beyond, and will continue to go beyond, the concern over which data is collected from consumers to the question of how this data is used by algorithms. Despite the existence of some consumer protection regulations, there is no effective mechanism available to consumers that tells them, for example, whether they have been automatically discriminated against by being denied loans or jobs.

The rapid advancement of technology has introduced numerous innovations to society, including the development of autonomous vehicles. These vehicles rely on algorithms embedded within their systems to manage navigation and respond to various driving conditions. Autonomous systems are designed to collect data and evaluate their surroundings in real time, allowing them to make decisions that simulate the actions of a human driver.

In their analysis, Camargo and Crespo address potential issues associated with the algorithms used in autonomous vehicles. They particularly emphasize the challenges related to decision-making during critical moments, highlighting the complexities and ethical considerations involved in programming such systems to ensure safety and fairness.

The technological landscape is rapidly changing with the advent of very powerful computers and algorithms that are moving toward the impressive development of artificial intelligence. We have no doubt that artificial intelligence will revolutionize the provision of services and also industry. The problem is that ethical issues urgently need to be thought through and discussed. Are we simply going to allow machines to judge us in court cases? Or to decide who should live or die in accident situations in which some technological equipment, such as autonomous cars, could intervene?

Writing on the TechCrunch website, Hemant Taneja observed:[7]

Concern about “black box” algorithms that govern our lives has been spreading. New York University’s Information Law Institute hosted a conference on algorithmic accountability, noting: “Scholars, stakeholders, and policymakers question the adequacy of existing mechanisms governing algorithmic decision-making and grapple with new challenges presented by the rise of algorithmic power in terms of transparency, fairness, and equal treatment.” Yale Law School’s Information Society Project is studying this, too. “Algorithmic modeling may be biased or limited, and the uses of algorithms are still opaque in many critical sectors,” the group concluded.

Possible solutions

Discussions among experts have sought viable solutions to understand the operations of algorithms, often referred to as "black boxes." It is generally proposed that companies responsible for developing and implementing these algorithms should ensure their reliability by disclosing the internal processes of their systems.

Hemant Taneja, writing for TechCrunch, emphasizes that major technology companies, such as Google, Amazon, and Uber, must actively incorporate algorithmic accountability into their operations. He suggests that these companies should transparently monitor their own systems to avoid stringent regulatory measures.[7]

One potential approach is the introduction of regulations in the tech sector to enforce oversight of algorithmic processes. However, such regulations could significantly impact software developers and the industry as a whole. It may be more beneficial for companies to voluntarily disclose the details of their algorithms and decision-making parameters, which could enhance the trustworthiness of their solutions.

Another avenue discussed is the possibility of self-regulation by the companies that create these algorithms, allowing them to take proactive steps in ensuring accountability and transparency in their operations.[7]
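
As one hedged sketch of what such voluntary accountability could look like in practice, the snippet below records every automated decision together with its inputs, model version, and outcome, producing a trail that an internal or external auditor could later check for systematically different results. The schema and function names are assumptions for illustration, not an established API.

```python
# Minimal decision audit log: every automated outcome is recorded with
# enough context to be reviewed later.
import datetime
import json

AUDIT_LOG = []

def log_decision(model_version: str, inputs: dict, decision: str) -> None:
    AUDIT_LOG.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "decision": decision,
    })

log_decision("credit-model-1.4", {"income": 52_000, "debt": 9_000}, "DENY")
print(json.dumps(AUDIT_LOG, indent=2))  # exportable for outside review
```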

In the same TechCrunch article, Taneja wrote:[7]

There’s another benefit — perhaps a huge one — to software-defined regulation. It will also show us a path to a more efficient government. The world’s legal logic and regulations can be coded into software and smart sensors can offer real-time monitoring of everything from air and water quality to traffic flows and queues at the DMV. Regulators define the rules, technologists create the software to implement them and then AI and ML help refine iterations of policies going forward. This should lead to much more efficient, effective governments at the local, national and global levels.
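
Taneja's notion of coding regulations into software can be sketched in a few lines. Below, a hypothetical air-quality limit is expressed as an executable rule and evaluated against a batch of sensor readings; the limit, units, and readings are assumptions chosen for illustration rather than drawn from any specific statute.

```python
# A regulation expressed as code, checked against sensor data.

PM25_LIMIT_UG_M3 = 35.0  # hypothetical 24-hour fine-particulate limit

def is_compliant(readings_ug_m3: list[float]) -> bool:
    """True if the average reading stays within the coded limit."""
    average = sum(readings_ug_m3) / len(readings_ug_m3)
    return average <= PM25_LIMIT_UG_M3

sensor_readings = [28.1, 33.4, 40.2, 31.7]  # hypothetical readings
print("compliant" if is_compliant(sensor_readings) else "violation")
# Regulators would define the limit; software evaluates it continuously.
```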

References

  1. Shah, H. (2018). "Algorithmic accountability". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 376 (2128): 20170362. Bibcode:2018RSPTA.37670362S. doi:10.1098/rsta.2017.0362. PMID 30082307. S2CID 51926550.
  2. Kobie, Nicole. "Who do you blame when an algorithm gets you fired?". Wired. Retrieved March 2, 2023.
  3. Angwin, Julia (August 2016). "Make Algorithms Accountable". The New York Times. Retrieved March 2, 2023.
  4. Kroll, Joshua A.; Huey, Joanna; Barocas, Solon; Felten, Edward W.; Reidenberg, Joel R.; Robinson, David G.; Yu, Harlan (2016). Accountable Algorithms. University of Pennsylvania. SSRN 2765268.
  5. "Algorithmic Accountability & Transparency". Nick Diakopoulos. Archived from the original on January 21, 2016. Retrieved March 3, 2023.
  6. Stark, Jennifer; Diakopoulos, Nicholas (March 10, 2016). "Uber seems to offer better service in areas with more white people. That raises some tough questions". The Washington Post. Retrieved March 2, 2023.
  7. Taneja, Hemant (September 8, 2016). "The need for algorithmic accountability". TechCrunch. Retrieved March 4, 2023.

Bibliography

  • Kroll, Joshua A.; Huey, Joanna; Barocas, Solon; Felten, Edward W.; Reidenberg, Joel R.; Robinson, David G.; Yu, Harlan (2016). Accountable Algorithms. University of Pennsylvania Law Review, Vol. 165. Fordham Law Legal Studies Research Paper No. 2765268.