
The Technocratic Oath

by Elliot Forbes, March 24th, 2018

I’ve not been too active on this platform in the past few months due to other projects going on, but this is something I’ve been wanting to write about for a long time now. A while back, I read an article that went viral on the likes of HackerNews and Reddit about a programmer who was ashamed of something he had written.

The article in question can be found here:


The code I’m still ashamed of — “If you write code for a living, there’s a chance that at some point in your career, someone will ask you to code…” (medium.freecodecamp.org)

Ultimately, the programmer in the above story was asked to create a general information site for teenage girls that recommended drug X and only drug X. The programmer apparently did a great job and was commended for his efforts. However, it turned out that the drug’s main side effects were depression and suicidal thoughts…

The day he was meant to be getting rewarded with an expensive steak dinner was the day he found out that a young girl had killed herself after taking the drug.

Now, this is most certainly not the only story I’ve seen recently that has come about due to questionable actions taken by programmers and data scientists alike.

We’ve seen the likes of #deletefacebook take off due to Cambridge Analytica’s manipulation of millions of people using data that they acquired from Facebook.

The scariest part about this is that it will, without doubt, happen more and more in coming months and years. We have no protection when it comes to saying no to requirements in our projects that could be considered unethical and/or immoral. By saying no, we as developers would most likely be putting our jobs at risk with no real repercussions for the people who asked us to implement these requirements.

As more and more companies start harvesting user data on us, the likelihood of something like Cambridge Analytica happening again becomes higher and higher.

This Twitter thread is utterly terrifying…

The Hippocratic Oath.

If you are familiar with any medical profession then you should be aware of the Hippocratic Oath. This Oath requires new medical professionals to uphold specific ethical standards.


Hippocratic Oath — “The Hippocratic Oath is an oath historically taken by physicians. It is one of the most widely known of Greek medical…” (en.wikipedia.org)

Now, this is more symbolic than a strict set of rules; however, it does ensure that newly graduated professionals contemplate their actions and do the best for the patients they are treating. But would a “technocratic oath” be worth contemplating for all software developers entering the field?

Whilst it won’t actually stop malpractice, it would encourage newly graduated developers and practitioners of the craft of programming to hold themselves to a certain standard of ethics when developing their systems.

Something like this represents a small step in the right direction until proper checks and balances can be put in place to ensure businesses are legally obligated to follow a stricter set of ethical guidelines.

The Challenges of Deciding Immorality

How do we, as developers, understand the consequences of what we build? For instance, how could the developers of, say, Webpack know what their module bundler would be used for? Are they aiding immoral companies and regimes by continuing to build excellent tools?

Most ethical questions fall into a “gray area”, as Chidi describes it. If you’ve watched The Good Place, you’ll have seen Chidi contemplate the trolley problem, a thought experiment in which he must decide whether to kill one person to save five. This is an experiment that will time and time again rear its head as we come closer to perfecting self-driving cars.

Chidi knows the struggle.

The more you drill down into the experiment, the more questions it raises. By saving the lives of five people and killing one, are we doing the best thing? What happens if those five people happen to be mass murderers or sex offenders, and that one person happens to have the cure for cancer locked up in his or her brain?

How, then, do we ensure that whatever we build or design follows the best moral practices, when those practices are not only debatable but the eventual uses of our systems are unknowable?

Conclusion

I’d be very interested to hear the thoughts of everyone else on this subject and discuss potential ways we could address these scenarios. Feel free to leave a comment in the comments section below or tweet me: Elliot Forbes

This idea of a “Technocratic Oath” certainly isn’t novel, and I’ve already seen efforts to adapt the Hippocratic Oath to suit software: https://gist.github.com/laurenancona/55c5eeac8ddfb33eb7b4cc20175217a9