
Why Your AI Startup Should Hire a Head of AI Ethics on Day 1

by Raphael, April 18th, 2024

Too Long; Didn't Read

Hiring our Head of AI Ethics early had an immediate impact on our startup: it helped us build ethical foundations, establish accountability, and protect our users' privacy.


Six months ago, we launched our startup Spheria.ai, a platform where people can create and host the AI version of themselves. As founders and consumers, we knew from day 1 that we wanted to build a product that would let people reclaim their personal data from the abuse of tech giants and protect user privacy.


Today, I'm sharing our experience of hiring a Head of AI Ethics as our very first employee, and how he turned our naive good intentions into a rigorous, foundational framework so we can build a legitimate platform that people trust to create the AI version of themselves.


Realizing how little we knew about AI Ethics

Like most founders, we were focused on delivering a great product and growing our user base while trying to stick to our moral compass.

At the very first meeting, Alejandro, our new Head of AI Ethics, brought us a framework to organize the big questions around privacy and ethics. Instead of drafting an unordered list of principles, he cross-referenced ethical frameworks from his research that had already been deployed in existing organizations.


Our Head of AI Ethics introduced us to a concept called Procedural Fairness, a framework used by the World Bank to keep fairness at the center of its decisions and policies. So the biggest win (right after the second meeting!) was graduating from a chaotic list of good intentions to ethics frameworks actually used by researchers in international organizations.


Principles and operational criteria of procedural fairness - a framework used by the World Bank that we adapted to set our AI Ethics foundations


Spheria's framework for procedural fairness in AI ethics, applied to creating and owning a personal AI



Right after that second meeting, we defined four pillars as Spheria's foundation for AI ethics: Transparency, Fairness, Accountability, and Privacy. Using this framework allowed us to visualize the relations between the pillars and see how one idea can ripple across several of them.


The consequences were immediate: these pillars led us to ask ourselves the right questions and brought a new dimension of awareness:

  • How do we evaluate fairness for our product?
  • How do we make sure all the features we create are inclusive?
  • As a platform that creates new AIs based on real people, are we accountable for the perpetuation of discrimination?
  • How are we transparent and accountable when making arbitrary decisions?



Sharing our experience hiring a Head of AI Ethics to help set moral safeguards


It's okay not to have all the answers, but we must ask the difficult questions.

During our first month, every meeting with our Head of AI Ethics felt like opening Pandora's box, in a good way: a million questions arose around freedom of speech, bias, inclusion, and censorship, and each felt as legitimate and urgent to answer as the next.


“The goal,” as Alejandro put it, was to “elevate ourselves to a higher level of confusion.” This mindset, that it's okay to still be working towards the right answers as long as we do so in a transparent and inclusive way, would become the foundation of our ethical policy.


It became clear this would take time and a lot of consideration. So we started an internal document listing every question brought up during meetings; we needed to keep track of all the ideas.


We would write the questions down as they came, then spend a minute evaluating whether each question was properly framed, which ethical concept it created tension around, and where and how that tension would surface in the product or in how it's used. Finally, we would check whether the question could be broken down into smaller parts for more granularity.
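
For readers who like to see a process as a structure: here is a minimal sketch of what one entry in that question log could look like in Python. All names (`EthicsQuestion`, `break_down`, the field names) are hypothetical illustrations, not Spheria's actual tooling.

```python
# Hypothetical sketch of the question log described above;
# names are illustrative, not Spheria's actual tooling.
from dataclasses import dataclass, field

@dataclass
class EthicsQuestion:
    """One entry in the running list of open ethics questions."""
    question: str                    # the question as it was raised
    pillars: list[str]               # which pillars it creates tension around
    product_surface: str             # where the tension shows up in the product
    sub_questions: list["EthicsQuestion"] = field(default_factory=list)

    def break_down(self, text: str, pillars: list[str],
                   surface: str) -> "EthicsQuestion":
        """Split a broad question into a more granular sub-question."""
        sub = EthicsQuestion(text, pillars, surface)
        self.sub_questions.append(sub)
        return sub

# Example usage, mirroring the moderation example that follows:
q = EthicsQuestion(
    "What is our moderation policy?",
    pillars=["Accountability", "Fairness"],
    product_surface="content ingestion and AI responses",
)
q.break_down("In what cases do we need a moderation policy?",
             ["Accountability"], "content ingestion")
```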


For example, the spontaneous question “What is our moderation policy?” needed to be broken down into sub-questions like:

  • “In what cases do we need a moderation policy?”

  • and “Are there laws that prevent someone from adding sensitive or illegal content into their own AI's knowledge?”

  • all the way to: “Should we filter the input, i.e., block the owner of an AI from adding illegal content into it? Or should we filter the output, i.e., block the AI from sharing information about illegal content?” (the sketch after this list contrasts the two)

  • to finally touch the point of tension: “How hard do we need to work to prevent the perpetuation of immoral content that is, let's be real, available elsewhere on the internet?”
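
To make the input-versus-output distinction concrete, here is a minimal sketch of where each filter would sit. `violates_policy`, `add_to_knowledge`, and `moderate_reply` are hypothetical names for illustration, not Spheria's actual pipeline.

```python
# Hypothetical sketch contrasting the two moderation points discussed above;
# violates_policy stands in for whatever classifier or rule set would be used.
def violates_policy(text: str) -> bool:
    """Placeholder for a real content-policy check (toy rule, illustration only)."""
    return "forbidden" in text.lower()

def add_to_knowledge(knowledge: list[str], content: str) -> None:
    """Option 1 - filter the INPUT: block the owner from adding the content at all."""
    if violates_policy(content):
        raise ValueError("Content rejected at ingestion time.")
    knowledge.append(content)

def moderate_reply(reply: str) -> str:
    """Option 2 - filter the OUTPUT: content may exist, but the AI won't share it."""
    if violates_policy(reply):
        return "I can't share that."
    return reply
```

The design choice matters: filtering input keeps questionable content out of the AI entirely, while filtering output leaves the owner's data intact but constrains what the AI will say.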


That list of open questions is significant today, but it's also valuable: it lays our foundations as a company and a moral entity, giving the team a real direction to build a future we believe in.


Be accountable to our users - actions speak louder than words

Most companies and startups want users to trust them, so they create nice catchphrases like “we love privacy” or “we are ethical” and get away with it.


After our launch, we saw that our privacy page was the most visited page after our landing page. Users are indeed creating their AI doubles, often importing their data from existing platforms like Instagram, LinkedIn, etc.


We knew we had to do more than just a privacy policy, but at the time, we didn't necessarily know how or what to do.

Having our Head of AI Ethics on our team allowed us to act on this and show real, tangible work. We created our AI Ethics Hub to demonstrate our dedication, be transparent about our efforts, and let users follow our progress.


Spheria created their AI Ethics and Privacy hub to share their research and safeguards.


By creating our AI Ethics Hub, we feel we are doing right by our users, especially considering what we're building with Spheria: letting people create the AI version of themselves.


Our users don't think about all of this when they create their official AI double, but privacy, ethical rules, and transparency are so tightly interwoven with creating your digital second self that, as the makers and founders, it's our job to be transparent, protect privacy, and provide ethical rules.


We hope this helps highlight the difference between startups that are actively engaged in privacy and ethics and those that just put pretty words on their landing page.


Setting the foundations for our company's culture

Having our Head of AI Ethics join our team so early was the best possible catalyst for building the right culture at Spheria. De facto, it put the values and principles discussed above at the foundation of our startup, and those foundations will always be there to support the future we're building.


“Stay scrappy” is what an investor told me a few months ago. Scrappy does not mean unethical, though, and having a full-time Head of AI Ethics keeps us accountable to all our users and to each team member who may speak up when lines are being bent.


The goal for us is to avoid any future (embarrassing and possibly dishonest) situation like the one in which OpenAI's CTO replied “I don't know” when a journalist asked what data was used to train the new Sora video model.


So I'm happy to say that being on this track definitely helps me sleep at night. It brings me some reassurance and a small boost of confidence to face the thousands of hurdles of growing a startup. It's also a strong signal for users, future hires, and investors to judge us by our actions.