The hidden risk of ethics regulation


Regulating the tech industry won’t fix its ethical problems. It might make them worse.

In a previous post, I argued that there were three ways of dealing with ethical problems:

  1. Build and maintain a legal/regulatory infrastructure that supports ethical behavior and discovers and punishes unethical behavior.
  2. Get a public assessment of ethical trustworthiness by requiring individual practitioners to prove skin in the game.
  3. Just take people’s word for it when they say they will behave ethically.

Option 3 is obviously flawed, but nevertheless seems to be the most popular way of tackling the tech industry’s ethical problems at the moment, usually manifesting itself as a code of ethics. I personally think option 2 is the most realistic and effective way to go. But, even though it’s involved and probably expensive to implement, there is definitely a case to be made for option 1. I think Mike Monteiro has written the most compelling argument I have seen for regulation. He writes specifically about design ethics, but his points apply equally well to data science ethics and even tech ethics in general.

I’m going to spend most of this post explaining what I see as a core difficulty with the regulatory approach to ethics, so I want to first try to do justice to the argument in favor of it. I’m trying to summarize Monteiro’s position here — if there are flaws in that summary they probably come from me, not him, so I strongly recommend reading his original post. (I recommend reading his post in any case — it’s really excellent). As I read it, here are Monteiro’s main premises:

  1. Ethical principles can only be upheld by individuals: an individual person has to be willing and able to stand up and say “no, I won’t do that.”
  2. Individual people often don’t feel willing or able to uphold ethical principles, because the person telling them to ignore those principles is their manager, or their boss, or someone else in a position of power. That power asymmetry prevents individual people from doing the right thing.
  3. The only way to get individual people to be reliably willing and able to behave ethically is to level out the power asymmetry, or even better, tip the asymmetry in the other direction, so people feel unwilling or unable to behave unethically.

Based on those premises, Monteiro argues that licensing and regulation are the way forward: rebalance power by creating a body of oversight, an organization that has resources that can mitigate unethical intent. It costs an employer or customer practically nothing to ask a practitioner to behave unethically (except the possible cost of finding a new practitioner if the current one proves intransigent). A professional organization, accrediting body, or union that fights for the individual practitioner raises that cost, gives individuals more freedom to make ethical choices, and therefore reduces unethical behavior.

I’m not sure I agree with any of Monteiro’s premises except the first, as I think most bad ethical decisions are made because of the honest difficulty people have in foreseeing the downstream consequences of their decisions. However, even if I were to accept all of the premises as true, I’m not convinced that the conclusion follows from them. I think regulation would address many of the kinds of ethical risks that have made headlines recently, but I think it would leave many risks in place and at the same time introduce new ones — a more systemic risk, in fact — that in the long term would actually expose the public and the industry to more potential downside than they currently face.

Pasquale Cirillo has laid out an analogy for the risks of regulation. His work focuses on financial regulation, but the principle is fully generalizable. He calls it the fence paradox: put up a fence and people come to treat it as perfectly safe, stacking their activities right up against it and even leaning on it, so that when the fence finally gives way, the damage is far worse than if it had never been there.

It’s probably better to view the fence paradox as a potential attribute of any regulation than as a definite attribute of all regulations. In other words, regulations can work well, as long as they don’t create the artificial security — and therefore the hidden systemic risk — of a fence.

It’s very natural to think that the answer to ethical problems is regulation. But that very regulation can make us feel like we have ethical protections in place when in fact we don’t. Regulation at scale requires rules that stipulate, in the case of the present discussion, what is ethical and what is not. Those rules have to be much more specific than what you see in any of the codes of ethics currently floating around: they need to be broken down into what, essentially, is a structured interview schedule, so any two arbitrary auditors trained to use the schedule could investigate an arbitrary company, team, or practitioner and answer the questions in similar ways.
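
To make that more concrete, here’s a rough sketch in Python of what such a structured schedule and its consistency check might look like. The questions and names are invented for illustration; nothing here corresponds to an existing standard or tool.

```python
from dataclasses import dataclass
from typing import Dict

# A hypothetical structured interview schedule: each item is a fixed yes/no
# question that any trained auditor should answer the same way for the same
# company, team, or practitioner.
SCHEDULE = [
    "Does the team document the intended use of every dataset it collects?",
    "Can an individual contributor halt a release over an ethical objection?",
    "Is there a written record of who approved each model deployment?",
]

@dataclass
class Audit:
    auditor: str
    answers: Dict[str, bool]  # question -> yes/no

def agreement(a: Audit, b: Audit) -> float:
    """Fraction of schedule questions two auditors answered identically."""
    matches = sum(a.answers[q] == b.answers[q] for q in SCHEDULE)
    return matches / len(SCHEDULE)

# Regulation at scale presupposes that this number stays high for arbitrary
# pairs of trained auditors; if it doesn't, the rules aren't really rules.
audit_a = Audit("auditor_a", {q: True for q in SCHEDULE})
audit_b = Audit("auditor_b", {SCHEDULE[0]: True, SCHEDULE[1]: False, SCHEDULE[2]: True})
print(agreement(audit_a, audit_b))  # 0.67: probably not consistent enough to regulate on
```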

If we can’t approximate consistency in the application of the ethical rules, then we have no basis for regulation. If, however, we can approximate consistency, then companies, teams, and individual practitioners are fully capable of abusing the system, behaving as unethically as possible while still keeping within the strict technical definition of the rules. That’s the risk of the fence: because the fence is there, they stack up all of their activities right next to it, push on it, bend it if possible. Because regulations are designed to alert us only when the fence has been breached, this doesn’t set off any alarms. When someone, either through malign intent or incompetence, does breach the fence, the results are catastrophic to all those people who had used the fence as a retaining wall, and to all the other people who had come to rely on the fence-leaners.

So how do we prevent fence-leaning? One way is to build fences that try to cover less ground. A great illustration of a small fence appeared recently in the Washington Post, as a restaurant owner described how she instituted rules for managing sexual harassment in her workplace.

We decided on a color-coded system in which different types of customer behavior are categorized as yellow, orange or red. Yellow refers to a creepy vibe or unsavory look. Orange means comments with sexual undertones, such as certain compliments on a worker’s appearance. Red signals overtly sexual comments or touching, or repeated incidents in the orange category after being told the comments were unwelcome.
When a staff member has a harassment problem, they report the color — “I have an orange at table five” — and the manager is required to take a specific action. If red is reported, the customer is ejected from the restaurant. Orange means the manager takes over the table. With a yellow, the manager must take over the table if the staff member chooses. In all cases, the manager’s response is automatic, no questions asked….
In the years since implementation, customer harassment has ceased to be a problem. Reds are nearly nonexistent, as most sketchy customers seem to be derailed at yellow or orange. We found that most customers test the waters before escalating and that women have a canny sixth sense for unwanted attention. When reds do occur, our employees are empowered to act decisively.
The color system is elegant because it prevents women from having to relive damaging stories and relieves managers of having to make difficult judgment calls about situations that might not seem threatening based on their own experiences. The system acknowledges the differences in the ways men and women experience the world, while creating a safe workplace.

These rules aren’t nearly explicit enough or standardized enough to be enforced consistently across a large number of restaurants. That’s the point: local solutions don’t require us to standardize definitions of right and wrong, because they don’t need to scale. We can actually just work on the basis of people’s “vibes” because the fixes are also local — they can’t as easily get out of hand, and they can be negotiated amongst a small group if and when they do. As long as the system is localized, it’s less important what system we use, and more important that we have a system in the first place.

It’s easy to imagine how this kind of system could be used to regulate a technical team. Any team member can flag a project decision. Yellow means the manager gets involved to lead a re-evaluation of the decision and, at the employee’s request, moves that employee to a different project (one not obviously worse in prestige, pay, scope, etc.). Orange means the manager gets involved and the employee gets to move to a new project. Red means the project gets put on hold while a full-scale re-evaluation takes place. Second-guessing the employee’s call is not an option until after the corrective action has already taken place. It’s not a perfect system, but it’s implementable in a way that doesn’t create fence risk, and it shifts the cost of ethical risk-taking to the employer without the need for a large-scale regulatory body.
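
As a minimal sketch of how mechanical that policy is (the mechanicalness is the point), the whole thing fits in a few lines of Python. The flag names come from the restaurant system above; the function and the action strings are my own invention, not an existing tool.

```python
from enum import Enum

class Flag(Enum):
    YELLOW = "yellow"  # something feels off about a project decision
    ORANGE = "orange"  # a serious concern
    RED = "red"        # a line has been crossed

def respond_to_flag(flag: Flag, reassignment_requested: bool = False) -> list:
    """Return the manager's required actions for a flagged project decision.
    The response is automatic: second-guessing the flag is not an option
    until after these actions have been taken."""
    if flag is Flag.RED:
        return ["put the project on hold", "run a full-scale re-evaluation"]
    actions = ["manager leads a re-evaluation of the flagged decision"]
    if flag is Flag.ORANGE or reassignment_requested:
        actions.append("move the employee to a comparable project "
                       "(no obvious loss of prestige, pay, or scope)")
    return actions

# "I have an orange on the recommendation-engine project."
print(respond_to_flag(Flag.ORANGE))
print(respond_to_flag(Flag.YELLOW, reassignment_requested=True))
```

The simplicity is deliberate: because the policy is local, the definitions of the colors can stay as loose as the “vibes” described above.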

The natural rejoinder to this argument is that it’s unrealistic to expect most companies to be willing to implement and stand by these kinds of policies without some sort of coercion from an outside body. That’s a fair point, but I think the coercion can come from the inside if properly managed and networked. Imagine a professional network or service that helps individual practitioners move jobs: companies that prioritize ethical practice can be alerted to a job seeker who wants to leave his or her position because of ethical concerns, and companies that lose employees because of ethical concerns can be alerted to that fact after the employee has obtained a new position. And, perhaps, it may be a good idea to publicize which companies have lost employees for ethical reasons. Kind of like a Better Business Bureau.
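
Here’s a rough sketch of the alerting logic such a network might use. Everything in it, from the class name to the idea of an opt-in list of ethics-conscious employers, is an assumption layered on the paragraph above, not a description of any existing service.

```python
from dataclasses import dataclass, field

@dataclass
class EthicsJobNetwork:
    """A toy model of the hypothetical matching service described above."""
    priority_companies: set = field(default_factory=set)  # employers opting in to ethical-hiring alerts
    departures: dict = field(default_factory=dict)        # former employer -> count of ethics-related exits

    def report_departure(self, job_seeker: str, former_employer: str) -> list:
        """A practitioner leaves over an ethical concern: alert opted-in employers now,
        and record the former employer so it can be flagged once the person has landed safely."""
        self.departures[former_employer] = self.departures.get(former_employer, 0) + 1
        return [f"alert {company}: {job_seeker} is looking to move for ethical reasons"
                for company in sorted(self.priority_companies)]

    def public_record(self) -> dict:
        """The Better Business Bureau-style view: which companies keep losing people over ethics."""
        return dict(sorted(self.departures.items(), key=lambda kv: -kv[1]))

# Hypothetical usage: company names are placeholders.
network = EthicsJobNetwork(priority_companies={"acme_analytics", "good_data_co"})
print(network.report_departure("practitioner_x", "shady_corp"))
print(network.public_record())
```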

The details, of course, need to be worked out much more fully, but I believe those details are more worth working out than are the details of a large-scale regulatory framework. If we can create a system that drains unethical companies of talent by making it easier for that talent to leave, those companies will have an incentive to implement local ethical regulations. Regulations don’t need to be scalable in order to be effective if the cost for individual practitioners of leaving unregulated situations is sufficiently low.

Large-scale regulation is dangerous. That doesn’t mean we shouldn’t do it — a great many dangerous things are very much worth doing. However, if we undertake to regulate, we should do so very slowly and cautiously. That is especially true when the target of our regulations is something as complex and poorly understood as the tech industry. Localized regulation can achieve the same ends without incurring the same risks, and it gives us a field in which many different teams and organizations can tinker with the rules, which increases the chances of eventually, perhaps, creating a set of rules for large-scale regulation that don’t also carry systemic risk.


Written by schaun.wheeler | Anthropologist + Data Scientist. Co-founder at Aampe.
Published by HackerNoon on 2018/04/11