[Note: more article summaries and updates to the information below at <a href="http://theprincipledalgorithm.com/index.php/2017/06/03/article-summary-discussion-around-decision-making-algorithms/" target="_blank">The Principled Algorithm</a>]
cites Paul Meehl's 1954 book: finds that “data-driven algorithms could better predict human behavior than trained clinical psychologists — and with much simpler criteria”
“Given the same set of data twice, we make two different decisions. Noise. Internal contradiction.”
“algorithms have come to replace humans in making decisions that affect many aspects of our lives, and on a scale that is capable of affecting society profoundly”
“what’s different about computer algorithms is the sheer volume and complexity of the data that can be factored in to decisions, and the potential to apply error and discrimination systematically”
“lack of transparency around algorithms is a serious issue”
“algorithms are also only as unbiased as the data they draw on”
“a suggested code of conduct was published in November last year, including five principles of good algorithms: responsibility, explainability, accuracy, auditability and fairness” [also refer to the MIT Technology Review article below, “How to Hold Algorithms Accountable”]
“such [deep learning] algorithms only work where the problem domain is well-understood and training data is available”
“they require a stable environment where future patterns are similar to past ones”
“machine-learning algorithms are as unbiased as the data with which they were [trained]”
“no-one knows, not even the creators of these algorithms, how exactly these algorithms reach their decisions”
“entrusting decisions to such algorithms would mean that we transfer accountability for decisions to those in charge of training them, effectively outsourcing our ethics”
“ironically, because of the grounding in past data, this supposedly disruptive technology cannot cope well with disruptive change”
“given the literally life-altering nature of these algorithmic decisions, they should receive careful attention and be held accountable for negative consequences”
“results [of machine learning algorithms] are shaped by human-made design decisions, rules about what to optimize, and choices about what training data to use”
article considers “accountability through the lens of five core principles: responsibility, explainability, accuracy, auditability, and fairness”
“what is happening right now, at an increasing pace, is the application of AI algorithms to all manner of processes that can significantly affect people’s lives — at work, at home and as they travel around”
“many of these algorithms are not open to scrutiny”
“key to the training is a process called ‘back propagation’, in which labelled examples are fed into the system and intermediate-layer settings are progressively modified until the output layer provides an optimal match to the input layer”
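[Note: the sketch below is an editorial illustration, not from the article. It shows the training loop described above for a tiny one-hidden-layer network in NumPy; the layer sizes, data and learning rate are made-up placeholders, and the weights are progressively adjusted until the output layer matches the labelled targets.]

```python
import numpy as np

# Minimal back-propagation sketch: a one-hidden-layer network trained on
# made-up labelled examples. Sizes, data and learning rate are illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))                               # input features
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)    # labels

W1 = rng.normal(scale=0.1, size=(4, 8))   # intermediate-layer weights
W2 = rng.normal(scale=0.1, size=(8, 1))   # output-layer weights
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(500):
    # Forward pass: labelled examples are fed through the network.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: the output error is propagated back through the layers,
    # and the intermediate-layer settings are progressively modified.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out) / len(X)
    W1 -= lr * (X.T @ d_h) / len(X)

print("mean absolute error after training:", float(np.mean(np.abs(out - y))))
```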
“we believe this curated way, where a human looks at the material and has the final call, is the right way to do it for critical applications”
“you’ll want to visualise what happens on the layers and how they engage with the data, and make it more transparent which piece of the evidence led to which decision, so that the network not only produces a result, but also points out the evidence and the reasoning process”
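[Note: the sketch below is an editorial illustration, not from the article. One simple way a network can “point out the evidence” is gradient-based saliency: computing how sensitive the output is to each input feature for a given example. The article does not name a specific method, and the weights here are random placeholders.]

```python
import numpy as np

# Gradient-based saliency sketch for a tiny one-hidden-layer network:
# how much does each input feature influence the output for one example?
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def saliency(x, W1, W2):
    # Forward pass through the hidden layer and the single output unit.
    h = sigmoid(x @ W1)
    out = sigmoid(h @ W2)
    # Chain rule: derivative of the output with respect to each input feature.
    d_out = out * (1 - out)                       # sigmoid derivative at the output
    d_h = (W2.flatten() * d_out) * h * (1 - h)    # back through the hidden layer
    return W1 @ d_h                               # one score per input feature

# Placeholder weights and a single example (illustrative only).
rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 1))
x = rng.normal(size=4)
print("per-feature saliency:", saliency(x, W1, W2))
```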
“we should pay attention to what people might do with today’s AI technology”
calls for due process in data-based decision-making. References an article by Kate Crawford, and mentions the EU GDPR, under which the right to obtain an explanation is “likely to affect a narrow class of automated decisions”
White House has “called for automated decision-making tools to be tested for fairness, and for the development of ‘algorithmic auditing’”
describes their finding of bias in a system by Northpointe [Note: Northpointe appears to have since rebranded as, or been acquired by, Equivant]
“yet as we rapidly enter the era of automated decision making, we should demand more than warning labels”
“citizens of EU member states might soon have a way to demand explanations of the decisions algorithms make about them”
“Calo explained over email how companies that use algorithms could pretty easily sidestep the new regulation”
“interpreting the decisions algorithms make is only going to get more difficult as the systems they rely on (e.g. neural networks) become more complex”
“recent calls by the UK Labour party for greater regulation not just of tech firms but of the algorithms themselves”
“algorithms are usually commercially sensitive and highly lucrative”
“the focus on regulation would need to shift to the inputs and the outputs of the algorithm”
“companies must be able to use their own algorithms as they see fit, with accountability for their misuse coming after the event”
“public at large remain generally unaware of these legal methods [i.e., that “people are able to object to automated decision making if such decisions have a significant impact on them”] to control corporate activities”
“a new, over-arching uber-regulator would be excessively costly, unwieldy and of limited impact”
“according to legal scholar Danielle Keats Citron, automated decision-making systems like predictive policing or remote welfare eligibility no longer simply help humans in government agencies apply procedural rules; instead, they have become primary decision-makers in public policy”
“algorithmic decision-making takes on a new level of significance when it moves beyond sifting your search results and into the realm of public policy”
“they also raise issues of equity and fairness, challenge existing due process rules, and can threaten Americans’ well-being”
We need to learn more about how policy algorithms work
We need to address the political context of algorithms
We need to address how cumulative disadvantage sediments in algorithms
We need to respect constitutional principles, enforce legal rights, and strengthen due process procedures
“decision-making algorithms are a form of politics played out at a distance, generating a troubling amount of emotional remove”
“Pentagon is discussing the possibility of replacing human drone operators with computer algorithms”
“there are already fears that the roving killing machines could be automated in the future”
“the way that drones are used to conduct warfare is stretching the limits of previous international conventions and is likely to require new rules of engagement to be drawn up”
“drones are not just becoming autonomous, they’re also becoming cooperative, smaller, and more agile”
“drones target individuals in very precise locations”
“the big question about drones is do they change the psychology of the people who are making the decisions to deploy lethal force? And I think a lot of people at this point would have to answer yes”