Who Takes Responsibility for Facebook’s Ethical Decisions?

Written by karimcmahon | Published 2019/02/01
Tech Story Tags: facebook | social-media | ethics | facebook-ethics | ux-research

Who is responsible for monitoring technology companies’ ethical decisions?

Ethics — “The discipline dealing with what is good and bad and with moral duty and obligation” — Merriam-Webster

Ethics are an integral component of day-to-day decision making. We often attribute our own ethical decisions to conscience or moral obligation. I didn’t really consider the wider concept of ethics until I started studying Applied Computing at university and needed to conduct user research. Many coursework projects leveraged user research techniques like focus groups, usability testing and interview sessions. The results from this research were vital in providing insight into the impact of the software development projects I was working on.

One part of the user research process I vividly remember is applying to the academic ethics committee any time I needed to conduct user research. The ethics application had to be detailed and transparent, outlining every step of the process: how I would contact participants, the documentation I would provide to them, any incentives they might receive, and so on. If anything in the application appeared unethical or simply unclear, a redraft of the ethics proposal was required. When I heard the news this week regarding Facebook’s Research app, one of my first thoughts was: where was the ethics committee?

Last week TechCrunch brought to light that Facebook was paying users to download a Research application that routed their traffic through a VPN. When users agreed to the app’s terms and conditions, they would receive monthly compensation in the form of a $20 gift card, and in return the app could capture a significant amount of the user’s activity: the messages they sent, the apps they used and the webpages they browsed. Facebook leveraged their Apple enterprise developer certificate to let users install the app directly, bypassing Apple’s App Store review. But why would Facebook want to bypass Apple’s process?


In June Apple banned developers from collecting user data that is not relevant to the core functionality of their app, e.g. information on other apps or services users were using. However, this ban did not faze Facebook, who continued their data collection efforts through the Facebook Research application. The users targeted were between the ages of 13 and 35, and those aged 13–17 also required parental permission. Facebook used the services Applause and BetaBound to sign users up for the app, which helped hide Facebook’s involvement.

Applause did make clear in the terms and conditions what type of data the app would track, whereas BetaBound simply stated that the downloaded app would run in the background. Facebook clearly knew they were pursuing data collection in a manner that could be viewed as unethical; they would not have bypassed the App Store or attempted to hide their involvement if an ethical process were being followed. Facebook only targeted a set number of individuals and did not publicly advertise the initiative, meaning they could avoid scrutiny. It also means vulnerable individuals may have given away a significant amount of their data without realising what they were committing to.

After reading the TechCrunch story I looked into the use of ethics committees in the business environment, and discovered it is not common practice for corporate companies to have an ethics panel or any form of ethical approval. I suppose I was naive to expect this. I currently build internal applications for a small group of users at work, and we have never needed ethical approval to do usability testing; I always believed this was because the internal users were familiar with the tools being tested and could easily identify when boundaries were being crossed. I expected things to be different in global technology companies with dedicated user research departments, whose research could potentially impact millions of users. If I had to go through a tedious and thorough ethical approval process to test a university project on five members of the public, surely a similar, if not more robust, process should be required of major technology companies.


Facebook are a leader in user research, and at times I aspired to work in their user research department. I enjoyed conducting user research at university and analysing the results; in fact, the UX and HCI modules had the most significant impact on me during my degree. I liked how research could be used to develop products that genuinely benefit society, especially for those with accessibility needs, who are often overlooked. I have seen many corporate companies neglect to properly integrate user research into their technology lifecycle; they often add it as an afterthought, and only if they have time. Facebook, however, truly followed a user-centred design approach and were at the forefront of bringing user research and HCI techniques from academia into industry.

Facebook seemed to really want to understand their users, and opened their doors to user experience researchers; in fact, they poached many top researchers from academia. With the information that has now come to light regarding the Facebook Research app, you wonder why researchers in Facebook’s research department didn’t raise concerns sooner. Surely they were familiar with ethics committees from their time in academia; surely they could see a line was being crossed. Did Facebook’s management push back against researchers’ concerns, or was it simply a case of following the crowd and believing that the company’s power meant they could overlook their ethical commitments?


The Research app targeted individuals between the ages of 13 and 35. In other words, the service targeted children, who are considered a vulnerable population in research studies. Applications for ethical approval of studies involving a vulnerable population are scrutinised heavily by academic ethics committees: the data captured from these groups can be extremely sensitive and must be handled with the proper care and due diligence. Users were presented with a straightforward sign-up process in which they could grant access to their entire digital footprint in return for an incentive. The simplicity of the process, combined with the incentive, means it could prey on vulnerable people in society who are looking for fast money. I find it hard to believe that no one in Facebook’s research community recognised the potentially dangerous implications of the Research app for vulnerable populations.

At university, ethics were a huge part of the curriculum: we studied the damaging long-term implications of unethical studies on participants, such as the Stanford Prison and Milgram experiments. This is not to say we should draw direct comparisons between Facebook and those studies; they are extremely different. However, the lessons from analysing them have stayed with me. Ethics modules demonstrate the importance of speaking up about unethical processes before they spiral out of control, and given Facebook’s hiring of top researchers from academia, I would have expected more employees to speak out against this project. If we can learn one thing from the TechCrunch story, it is the importance of keeping ethics modules in computer science curricula, so that students can recognise when ethical boundaries are being crossed in the software and hardware projects they contribute to.

This isn’t the first time Facebook has breached ethical standards. In an article about ethics committees, the Interaction Design Foundation noted that in 2014 Facebook manipulated the news feeds of 689,000 users, which led to significant mainstream criticism and claims that Facebook were treating users like lab rats. In 2013, Facebook had also acquired Onavo, a VPN app with surveillance capabilities that worked in a similar manner to the Facebook Research app. It remained on Apple’s App Store until Apple’s policy change in June led to its removal. Onavo was vital in Facebook’s acquisition of WhatsApp, as its data showed that users were using WhatsApp more frequently than Messenger. Both cases show that although there was widespread outrage over these research initiatives, there were no real consequences for Facebook, and they continued to pursue similar initiatives.

Facebook continually upset and outrage users, brushing off the backlash with lazy apologies. They will continue to grow at all costs with the support of shareholders whose eyes light up with dollar signs at statistics like Facebook’s 1.5bn daily active users, even if that significant figure is littered with a series of data and privacy scandals.


Is it time we had external panels or audits to oversee the research being done by technology companies? Panels consisting of experts from academia, government and the technology sector, with the power to impose regulatory repercussions if ethical requirements are not met. Is this even possible to achieve?

Coming to terms with regulation may be difficult for the technology sector, but it may be for the best. What would have happened if Apple hadn’t held their App Store to such high standards? How many people would have skimmed the terms and conditions and given away their entire digital footprint for a small fee? You may claim that is their own fault, but is it even ethical to ask for that much data from individuals in the first place?

Yes, we as individuals have a responsibility to know what we are signing up for, but technology companies must also take some responsibility. Facebook is a brand name that many vulnerable people trust to be doing the right thing, and manipulating that trust is unethical. If technology companies consistently prove they cannot act in an ethical manner, some form of regulation must be considered.

I will end this article with a reminder of the APA’s five general ethical principles:

  • Principle A — Beneficence & Nonmaleficence
  • Principle B — Fidelity & Responsibility
  • Principle C — Integrity
  • Principle D — Justice
  • Principle E — Respect for People’s Rights & Dignity
