
Before Thinking About Ethical AI, First, Become Responsible

by Noa Azaria, December 19th, 2022


When people talk about Responsible AI and Ethical AI, there still seems to be confusion: are they just the same concept with different names? Let's blow some minds: Ethical AI actually falls under the Responsible AI (RAI) umbrella. It serves as a moral code for governments, corporations, organizations, and ML practitioners, meant to prevent "Skynet" from becoming reality.


While the "Terminator" scenario is a bit far-fetched, building AI ethically and deploying it for good is crucial for mass adoption. Still, that comes second to ensuring the AI works as we want it to in the first place.


When a model's performance degrades or drift occurs, ML engineers, data science teams, and AI stakeholders need to know exactly what to do, before the problem impacts business continuity or society at large. Using AI to take on a noble goal like fighting climate change or protecting wildlife is definitely a good thing. But if your AI framework is not defined, decision trees for alert response don't exist, and you have minimal visibility into your model's performance, then you're not really positioned to solve any of these noble issues.


Governments, Cities, & Major Corporations are Joining the Ethical AI & RAI Discussion


Over the past decade, we've seen major organizations, such as Microsoft, Amazon, and Google, attempt to tackle the theory of ethical and responsible AI. The EU has created ethics guidelines for trustworthy AI as a way to certify its responsible use. We've also recently witnessed the White House introduce a blueprint for an AI Bill of Rights. Even the city of New York has started getting ethical with AI. All of these examples highlight the progress being made toward a global consensus on how to build and deploy AI.


Still, open questions remain: what are our limitations? How do different businesses and use cases adopt these guidelines within their own products, frameworks, and teams?


Why Responsible AI Comes First

Responsible AI is a framework of practices and tools that ensures machine learning models are working as intended and, when they're not, lays out what to do next: how do I minimize unintended consequences in my AI before they harm my business or my users? Only with an RAI framework in place can organizations aspire to act upon their ethical AI goals and avoid unintentionally biased outcomes. If a model is experiencing drift, and subsequently underperforming, no amount of ethics or good intentions will alert you to the exact point that's causing the issues.


4 Quick Wins to Practice Responsible AI

In order to trust your AI and ensure its performance is driving ethical value, you can easily implement these 4 quick wins and start practicing RAI:


Model Visibility


One place where all relevant stakeholders can see and understand the status of all production models.


  1. Get a centralized dashboard for all relevant stakeholders. It's a great way to see the status of all production models, and an easy visualization for the bosses when they start asking questions 😉
  2. Track all models in production! Include training data, model artifacts, and metrics from different data slices.
  3. Store model inputs, outputs, and any inference data in your data lake. The more information you have on your models, the easier it is to navigate through any issue and improve performance (a minimal logging sketch follows this list).
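
To make point 3 concrete, here is a minimal sketch of what logging inference data could look like. It is illustrative only: the `log_inference` helper and the local `data_lake/` directory are hypothetical stand-ins, and a production setup would more likely batch records into Parquet on an object store such as S3 rather than write one JSON file per prediction.

```python
import json
import uuid
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical local "data lake" root; in practice this would be
# an object store such as S3 or GCS.
DATA_LAKE = Path("data_lake/inference_logs")

def log_inference(model_name: str, model_version: str,
                  features: dict, prediction: float) -> None:
    """Persist one inference record so it can be replayed, sliced,
    and audited later."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_name": model_name,
        "model_version": model_version,
        "features": features,
        "prediction": prediction,
    }
    # Partition by model and day so later queries can scan narrow slices.
    day_dir = DATA_LAKE / model_name / datetime.now(timezone.utc).strftime("%Y-%m-%d")
    day_dir.mkdir(parents=True, exist_ok=True)
    with open(day_dir / f"{record['id']}.json", "w") as f:
        json.dump(record, f)

# Example: log a single prediction from a (made-up) churn model.
log_inference("churn_model", "1.3.0",
              {"tenure_months": 14, "monthly_spend": 42.5}, 0.87)
```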



From Reactive to Proactive Response


AI's black-box nature and its numerous use cases make it impossible to define your model's unexpected behavior in advance. Monitoring and alerting will help you act quickly: respond now; remediation will come later.


1. Set monitors and alerts for any deviations outside the defined standards (a drift-check sketch follows this list).

2. Define your model's expected behavior: input data distributions, data science and business KPIs, expected prediction distribution, and activity volume.

3. Ensure your main communication channels are integrated with your alerting mechanism, so that all relevant stakeholders have easy access and can encourage healthy discussion on optimizing response.
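
As one illustration of items 1 and 2, here is a small sketch of a drift check using SciPy's two-sample Kolmogorov-Smirnov test. The thresholds and the `send_alert` stub are assumptions for the example; a real setup would route alerts to Slack or PagerDuty and monitor many features and KPIs, not one.

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical thresholds; tune per model and business tolerance.
DRIFT_P_VALUE = 0.01      # KS-test significance level
VOLUME_DROP_RATIO = 0.5   # alert if live traffic falls below half the baseline

def input_has_drifted(training_sample: np.ndarray, live_sample: np.ndarray) -> bool:
    """Two-sample Kolmogorov-Smirnov test on one numeric feature:
    a small p-value suggests the live distribution has shifted."""
    result = ks_2samp(training_sample, live_sample)
    return result.pvalue < DRIFT_P_VALUE

def send_alert(message: str) -> None:
    # Stub: in practice, route this to Slack, PagerDuty, email, etc.
    print(f"[ALERT] {message}")

def run_checks(training_sample, live_sample, baseline_volume, live_volume):
    if input_has_drifted(np.asarray(training_sample), np.asarray(live_sample)):
        send_alert("Input distribution drift detected.")
    if live_volume < baseline_volume * VOLUME_DROP_RATIO:
        send_alert(f"Activity volume dropped: {live_volume} vs. baseline {baseline_volume}.")

# Example: compare a training feature sample against a shifted live sample.
rng = np.random.default_rng(0)
run_checks(rng.normal(0, 1, 5_000), rng.normal(0.5, 1, 5_000),
           baseline_volume=10_000, live_volume=4_000)
```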



Incident-Response Workflow


By responding swiftly and efficiently, you can significantly mitigate the impact of an ML event on your business and users.


  1. Determine urgency, but be objective.
  2. Define a workflow for handling ML events: create a decision tree and question flow so that every stakeholder knows their role, even if things go down at 2:00 AM (a toy decision tree is sketched after this list).
  3. Determine which fallback mechanisms are activated for different scenarios.
  4. Learn. Learn. Learn. Always log and summarize the incident, and learn how to improve for next time.
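
To illustrate item 2, here is a toy decision tree encoded as a lookup table: event type and severity map to an owner and a next action. The playbook entries, team names, and actions are invented for the example; a real workflow would live in your incident tooling and runbooks, not in a script.

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    LOW = 1
    HIGH = 2
    CRITICAL = 3

@dataclass
class MLEvent:
    kind: str        # e.g. "drift", "latency", "prediction_volume"
    severity: Severity

# Hypothetical decision tree: (event kind, severity) -> (owner, next action).
PLAYBOOK = {
    ("drift", Severity.LOW):      ("data_science", "investigate within 24h"),
    ("drift", Severity.HIGH):     ("data_science", "prepare a retrained candidate model"),
    ("drift", Severity.CRITICAL): ("ml_engineering", "activate fallback model"),
    ("latency", Severity.CRITICAL): ("ml_engineering", "route traffic to cached heuristic"),
}

def triage(event: MLEvent) -> tuple[str, str]:
    """Look up owner and next action; unknown events escalate by default."""
    return PLAYBOOK.get((event.kind, event.severity),
                        ("on_call_lead", "escalate and log for postmortem"))

# Example: critical drift at 2:00 AM still resolves to a clear owner and action.
owner, action = triage(MLEvent("drift", Severity.CRITICAL))
print(f"Page {owner}: {action}")
```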



Performance Review


If no one reviews model performance, how do we know it’s meeting its goals?

  1. Schedule a weekly or bi-weekly meeting with all relevant stakeholders to review and evaluate production performance.
  2. Write an agenda in advance, and include:
    • Reviewing action items from the last meeting.
    • Comparing expected KPIs against actual performance metrics (a small comparison sketch follows this list).
    • Listing new questions, and defining workflow and ownership for any unexpected performance gaps, with goals and owners for the next cycle.
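
As a sketch of the KPI comparison on that agenda, the snippet below flags metrics that miss their targets so they become action items. The target values and the "lower is better for latency" convention are assumptions made for the example.

```python
# Hypothetical KPI targets and observed weekly production metrics.
KPI_TARGETS = {"auc": 0.85, "precision": 0.80, "avg_latency_ms": 120}
observed = {"auc": 0.82, "precision": 0.83, "avg_latency_ms": 140}

def performance_gaps(targets: dict, actuals: dict) -> list[str]:
    """List metrics that miss their target, for the review agenda.
    Latency metrics are 'lower is better'; everything else is
    'higher is better'."""
    gaps = []
    for metric, target in targets.items():
        actual = actuals[metric]
        missed = actual > target if metric.endswith("_ms") else actual < target
        if missed:
            gaps.append(f"{metric}: target {target}, actual {actual}")
    return gaps

# Example: print the gaps as action items with an owner to be assigned.
for gap in performance_gaps(KPI_TARGETS, observed):
    print("Action item:", gap)
```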


Closing Thoughts

A vast number of private and public entities are leveraging AI to make our world a better place. Whether it's saving the whales or ensuring Twitter has fewer fake-news-spreading bots (looking at you, Elon), ethical AI should always be at the top of the priority list. Getting there, though, first requires organizations to adopt an efficient RAI framework so they can actually start pursuing their ethical goals. While many are burnt out from discussions around responsible AI, the 4 quick wins above lay down a simple checklist for putting it into practice.



The 4 Quick Wins were derived from Liran Hason’s view on how to start practicing Responsible AI.