Meta Article on the discussion about decision-making algorithms (Part 1)

by Paul Hunt, June 4th, 2017

http://sloanreview.mit.edu/article/what-to-expect-from-artificial-intelligence/

[Note: more article summaries and updates to the information below at The Principled Algorithm: http://theprincipledalgorithm.com/index.php/2017/06/03/article-summary-discussion-around-decision-making-algorithms/]

2014, HBR.org: A Process for Human-Algorithm Decision Making

  • Organisational decision making = collect facts > list options > make a choice > take action
  • analytics (i.e. an algorithmic process) can automate parts of this process (a minimal sketch follows this list)
  • re-design processes around the algorithm and fit people’s roles to it
  • “increase decision process efficiency by as much as 25%”
  • can lead to extensive organisational transformation
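
Concretely, the automatable step is the scoring of options. Here is a minimal Python sketch, with a made-up benefit-minus-cost rule standing in for whatever analytics an organisation actually uses; none of these names come from the HBR article.

```python
# A minimal sketch, assuming a toy scoring rule; all names are illustrative.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    cost: float
    expected_benefit: float

def collect_facts():
    # Step 1: gather facts. Here, hard-coded stand-ins for real business data.
    return [Option("expand", 120.0, 200.0), Option("hold", 10.0, 40.0)]

def score(option):
    # The automatable "analytics" step: a simple benefit-minus-cost rule.
    return option.expected_benefit - option.cost

# Steps 2-4: list options, make the choice algorithmically, then act on it.
options = collect_facts()
choice = max(options, key=score)
print(f"Take action on: {choice.name}")
```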

2017, NPR.org: Will Algorithms Erode Our Decision-Making Skills?

  • cites Pew Research Center report
  • worried about algorithms reducing our ability to make decisions; that we could become too reliant on technology
  • “algorithms are the new arbiters of human decision-making in almost any area we can imagine”
  • dehumanisation of people as ‘inputs’ into the process
  • goes on to quote participants in the Pew report

2017, Farnam Street Blog: Do Algorithms Beat Us at Complex Decision Making?

  • cites Paul Meehl’s book from 1954: finds that “data-driven algorithms could better predict human behavior than trained clinical psychologists — and with much simpler criteria”
  • “Given the same set of data twice, we make two different decisions. Noise. Internal contradiction.” (illustrated in the sketch after this list)
  • quotes liberally from Kahneman’s “Thinking, Fast and Slow”
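
Meehl’s “noise” point is easy to demonstrate: a fixed rule applied to the same data always agrees with itself, while a judgement with random noise in it may not. A toy sketch (my construction, not Meehl’s or Kahneman’s):

```python
# Toy illustration of "noise": a fixed linear rule is perfectly consistent,
# while a noisy judge can reach different conclusions from the same data.
import random

random.seed(0)
WEIGHTS = [0.6, 0.3, 0.1]  # the "much simpler criteria": a fixed weighted sum

def linear_rule(features):
    score = sum(w * x for w, x in zip(WEIGHTS, features))
    return score > 0.5

def noisy_judge(features):
    # Same underlying criteria, plus random "noise" in each judgement.
    score = sum(w * x for w, x in zip(WEIGHTS, features)) + random.gauss(0, 0.2)
    return score > 0.5

case = [0.7, 0.4, 0.2]  # one borderline case, evaluated repeatedly
print("rule over 5 reads: ", [linear_rule(case) for _ in range(5)])   # always agrees
print("judge over 5 reads:", [noisy_judge(case) for _ in range(5)])   # may contradict itself
```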

2016, article19.org: Algorithms and automated decision-making in the context of crime prevention

  • article is an executive summary of a linked report
  • in sum: “ARTICLE 19 believes that it is important to ensure that human rights are protected in the context of algorithmic decision-making”
  • uses many examples in industry of automated processing by algorithms

2017, PublicTechnology.net: Algorithms in decision-making inquiry: Stephanie Mathisen on challenging MPs to investigate accountability

  • “algorithms have come to replace humans in making decisions that affect many aspects of our lives, and on a scale that is capable of affecting society profoundly”
  • “what’s different about computer algorithms is the sheer volume and complexity of the data that can be factored in to decisions, and the potential to apply error and discrimination systematically”
  • “lack of transparency around algorithms is a serious issue”
  • “algorithms are also only as unbiased as the data they draw on”
  • “a suggested code of conduct was published in November last year, including five principles of good algorithms: responsibility, explainability, accuracy, auditability and fairness” [also refer to the MIT Technology Review article below, “How to Hold Algorithms Accountable”]

2017, The Guardian: AI watchdog needed to regulate automated decision-making, say experts

  • “artificial intelligence watchdog should be set up to make sure people are not discriminated against”
  • “the systems can, and do, make bad decisions that seriously impact people’s lives”
  • “in Britain, the Data Protection Act allows automated decisions to be challenged”
  • “the final version [of the EU GDPR] approved last year contains no legal guarantee [to a ‘right to explanation’]”
  • “may find it hard to police algorithms” … “because some modern AI methods, such as deep learning, are ‘fundamentally inscrutable’”
  • “decisions taken by algorithms will need to be explained in different ways depending on what they do” (one generic explanation technique is sketched after this list)
  • various examples noted of algorithms gone bad
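
The article names no concrete explanation method, but one generic technique that fits the “explained in different ways” point is permutation importance: shuffle one input at a time and measure how much the model’s behaviour degrades. A self-contained sketch with a made-up stand-in model:

```python
# Permutation importance, sketched on a hypothetical "black box" model.
# The model and feature names are mine, not from the Guardian article.
import random

random.seed(1)

def model(row):
    # Stand-in black box: income matters a lot, age only a little.
    income, age = row
    return 1 if (0.8 * income + 0.2 * age) > 0.5 else 0

data = [(random.random(), random.random()) for _ in range(1000)]
labels = [model(row) for row in data]  # the model's own outputs as reference

def accuracy(rows):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(labels)

baseline = accuracy(data)
for i, name in enumerate(["income", "age"]):
    shuffled_col = [row[i] for row in data]
    random.shuffle(shuffled_col)
    permuted = [tuple(shuffled_col[k] if j == i else v for j, v in enumerate(row))
                for k, row in enumerate(data)]
    drop = baseline - accuracy(permuted)
    print(f"permuting {name} drops accuracy by {drop:.2f}")  # bigger drop = more important
```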

2017, BOSSEmergingLeaders.com.au: Do algorithms make better decisions?

  • “such [deep learning] algorithms only work where the problem domain is well-understood and training data is available”
  • “they require a stable environment where future patterns are similar to past ones”
  • “machine-learning algorithms are as unbiased as the data with which they were [trained]”
  • “no-one knows, not even the creators of these algorithms, how exactly these algorithms reach their decisions”
  • “entrusting decisions to such algorithms would mean that we transfer accountability for decisions to those in charge of training them, effectively outsourcing our ethics”
  • “ironically, because of the grounding in past data, this supposedly disruptive technology cannot cope well with disruptive change”

2017, boingboing.net: Algorithmic decision-making: an arms-race between entropy, programmers and referees

  • “‘entropic forces’ make algorithmic decision-making tools worse over time, requiring that they be continuously maintained and improved” (a toy illustration follows this list)
  • “kind of taxonomy of the kinds of problems that machine learning can be safely deployed against, given these asymmetries”
  • article is based on a paper by Nesta’s Juan Mateos-Garcia
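
A toy illustration of the entropy argument, under the assumption that the environment drifts steadily while the model stays fixed (my construction, not Mateos-Garcia’s):

```python
# A stale "model" (a fixed threshold) decays as the world drifts, and the
# maintenance loop retrains it whenever monitored accuracy degrades.
import random

random.seed(2)

threshold = 0.5           # the "model": flag a case when its signal > threshold
drift = 0.0

for month in range(1, 13):
    drift += 0.05         # the environment slowly shifts under the model
    cases = [random.random() + drift for _ in range(500)]
    truth = [c > 0.5 + drift for c in cases]   # what an up-to-date model would say
    preds = [c > threshold for c in cases]     # what the stale model says
    acc = sum(p == t for p, t in zip(preds, truth)) / len(cases)
    print(f"month {month:2d}: accuracy {acc:.2f}")
    if acc < 0.9:         # maintenance: retrain when performance degrades
        threshold = 0.5 + drift
        print("          retrained on fresh data")
```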

2016, MIT Technology Review: How to Hold Algorithms Accountable

  • “given the literally life-altering nature of these algorithmic decisions, they should receive careful attention and be held accountable for negative consequences”
  • “results [of machine learning algorithms] are shaped by human-made design decisions, rules about what to optimize, and choices about what training data to use”
  • article considers “accountability through the lens of five core principles: responsibility, explainability, accuracy, auditability, and fairness”

2016, ZDNet: Inside the black box: Understanding AI decision-making

  • “what is happening right now, at an increasing pace, is the application of AI algorithms to all manner of processes that can significantly affect peoples’ lives — at work, at home and as they travel around”
  • “many of these algorithms are not open to scrutiny”
  • “key to the training is a process called ‘back propagation’, in which labelled examples are fed into the system and intermediate-layer settings are progressively modified until the output layer provides an optimal match to the input layer” (a toy numerical version follows this list)
  • “we believe this curated way, where a human looks at the material and has the final call, is the right way to do it for critical applications”
  • “you’ll want to visualise what happens on the layers and how they engage with the data, and make it more transparent which piece of the evidence led to which decision, so that the network not only produces a result, but also points out the evidence and the reasoning process”
  • “we should pay attention to what people might do with today’s AI technology”
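
The back-propagation quote compresses a lot; a from-scratch toy version makes the mechanics concrete. This sketch (mine, not ZDNet’s) trains a tiny 2-2-1 network on XOR, a problem that genuinely needs an intermediate layer; an unlucky random seed can land in a poor local minimum, in which case more epochs or another seed helps.

```python
# Toy backpropagation: feed labelled examples forward, push the output error
# back, and nudge the intermediate-layer weights toward a better fit.
import math, random

random.seed(3)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

# 2 inputs -> 2 hidden units -> 1 output; each unit carries a bias weight.
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w2 = [random.uniform(-1, 1) for _ in range(3)]
lr = 0.5

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
    o = sigmoid(w2[0] * h[0] + w2[1] * h[1] + w2[2])
    return h, o

for epoch in range(10000):
    for x, y in data:
        h, o = forward(x)
        # backward pass: output error first, then hidden-layer errors
        d_o = (o - y) * o * (1 - o)
        d_h = [d_o * w2[i] * h[i] * (1 - h[i]) for i in range(2)]
        # weight updates (gradient descent)
        for i in range(2):
            w2[i] -= lr * d_o * h[i]
            w1[i][0] -= lr * d_h[i] * x[0]
            w1[i][1] -= lr * d_h[i] * x[1]
            w1[i][2] -= lr * d_h[i]
        w2[2] -= lr * d_o

for x, y in data:
    _, o = forward(x)
    print(x, "->", round(o, 2), "(target", y, ")")
```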

2016, Pro Publica: Making Algorithms Accountable

  • calls for due process in data-based decision-making. References article by Kate Crawford, and mentions the EU GDPR, in which the right to obtain an explanation is “likely to affect a narrow class of automated decisions”
  • White House has “called for automated decision-making tools to be tested for fairness, and for the development of ‘algorithmic auditing’”
  • describes their finding of bias in a system by Northpointe [Note: seems they may have changed their name to, or have been bought out by, Equivant] (the kind of error-rate audit involved is sketched after this list)
  • “yet as we rapidly enter the era of automated decision making, we should demand more than warning labels”
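
ProPublica’s Northpointe analysis centred on comparing error rates, in particular false positive rates, across demographic groups. A sketch of that kind of audit on made-up records (the data here is illustrative, not theirs):

```python
# Compare a risk tool's false positive rate across groups; unequal rates are
# the sort of disparity ProPublica reported. Records are synthetic.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, True),  ("A", False, False), ("A", True, False),
    ("B", False, False), ("B", True, True), ("B", False, True),  ("B", False, False),
]

fp = defaultdict(int)   # predicted high risk but did not reoffend
neg = defaultdict(int)  # everyone who did not reoffend

for group, predicted, actual in records:
    if not actual:
        neg[group] += 1
        if predicted:
            fp[group] += 1

for group in sorted(neg):
    print(f"group {group}: false positive rate {fp[group] / neg[group]:.2f}")
```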

2016, Fusion: EU citizens might get a ‘right to explanation’ about the decisions algorithms make

  • “citizens of EU member states might soon have a way to demand explanations of the decisions algorithms make about them”
  • “Calo explained over email how companies that use algorithms could pretty easily sidestep the new regulation”
  • “interpreting the decisions algorithms make is only going to get more difficult as the systems they rely on (e.g. neural networks) become more complex”

2016, The Conversation: Here’s how we can protect ourselves from the hidden algorithms that influence our lives

  • “algorithms can be programmed to be biased or unintentional bias can creep into the system”
  • “recent calls by the UK Labour party for greater regulation not just of tech firms but of the algorithms themselves”
  • “algorithms are usually commercially sensitive and highly lucrative”
  • “the focus on regulation would need to shift to the inputs and the outputs of the algorithm”
  • “companies must be able to use their own algorithms as they see fit, with accountability for their misuse coming after the event”
  • “public at large remain generally unaware of these legal methods [that “people are able to object to automated decision making if such decisions have a significant impact on them”] to control corporate activities”
  • “a new, over-arching uber-regulator would be excessively costly, unwieldy and of limited impact”

2015, Slate Future Tense: The Policy Machine

  • “according to legal scholar Danielle Keats Citron, automated decision-making systems like predictive policing or remote welfare eligibility no longer simply help humans in government agencies apply procedural rules; instead, they have become primary decision-makers in public policy”
  • “algorithmic decision-making takes on a new level of significance when it moves beyond sifting your search results and into the realm of public policy”
  • “they also raise issues of equity and fairness, challenge existing due process rules, and can threaten Americans’ well-being”
    1. We need to learn more about how policy algorithms work
    2. We need to address the political context of algorithms
    3. We need to address how cumulative disadvantage sediments in algorithms
    4. We need to respect constitutional principles, enforce legal rights, and strengthen due process procedures
  • “decision-making algorithms are a form of politics played out at a distance, generating a troubling amount of emotional remove”

2013, ABC RN Rear View (AU): Future of drone strikes could see execution by algorithm

  • “Pentagon is discussing the possibility of replacing human drone operators with computer algorithms”
  • “there are already fears that the roving killing machines could be automated in the future”
  • “the way that drones are used to conduct warfare is stretching the limits of previous international conventions and is likely to require new rules of engagement to be drawn up”
  • “drones are not just becoming autonomous, they’re also becoming cooperative, smaller, and more agile”
  • “drones target individuals in very precise locations”
  • “the big question about drones is do they change the psychology of the people who are making the decisions to deploy lethal force? And I think a lot of people at this point would have to answer yes”

2014, Medium: How big data is unfair

  • “if the training data reflect existing social biases against a minority, the algorithm is likely to incorporate these biases”
  • “race and gender, for example, are typically redundantly encoded in any sufficiently rich feature space whether they are explicitly present or not” (demonstrated in the sketch after this list)
  • “it’s true by definition that there is always proportionately less data available about minorities”
  • “differences in classification accuracy between different groups is a major and underappreciated source of unfairness”
  • “achieving fairness might be computationally expensive if it forces us to look for more complex decision rules”
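
The “redundantly encoded” claim can be shown in a few lines: remove the protected attribute, keep a correlated proxy (a hypothetical postcode here), and the attribute is still recoverable. A synthetic-data sketch:

```python
# Redundant encoding: even with the protected attribute dropped, a correlated
# proxy feature recovers it. Data and the "postcode" proxy are synthetic.
import random

random.seed(4)

rows = []
for _ in range(1000):
    group = random.choice(["minority", "majority"])
    # Hypothetical proxy: postcode correlates with group 90% of the time.
    if random.random() < 0.9:
        postcode = 1 if group == "minority" else 0
    else:
        postcode = 0 if group == "minority" else 1
    rows.append((postcode, group))

def predict_group(postcode):
    # A trivial "classifier" that never sees the group label, only the proxy.
    return "minority" if postcode == 1 else "majority"

correct = sum(predict_group(p) == g for p, g in rows)
print(f"group recovered from proxy alone: {correct / len(rows):.0%} of the time")
```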