When talking about Responsible AI and Ethical AI, there still seems to be confusion. Are they just the same concept with different names? Let’s blow some minds – Ethical AI actually falls under the Responsible AI (RAI) umbrella. It serves as a moral code for governments, corporations, organizations, and ML practitioners to prevent “Skynet” from becoming reality.
While the “Terminator” scenario is far-fetched, building AI ethically and deploying it for good is crucial for mass adoption. Still, it comes second to ensuring the AI first works as we want it to.
When the performance of a model degrades or drift happens, ML engineers, data science teams, and AI stakeholders need to know exactly what to do – before it impacts business continuity or society at large. Using AI to take on a noble goal like fighting climate change or protecting wildlife is definitely a good thing, but if your AI framework is not defined, decision trees for alert response don’t exist, and you have minimal visibility into your model performance, then you’re not really positioned to solve any of these noble issues.
Over the past decade, we’ve seen major organizations – such as
What are our limitations? How do different businesses and use cases adopt these guidelines within their own products, frameworks, and teams?
Responsible AI is a framework of practices and tools that ensures machine learning models are working as intended, and defines what to do when they’re not: how do I minimize unintended consequences in my AI before they harm my business or my users? Only with an RAI framework in place can organizations act on their ethical AI goals and avoid unintentionally biased outcomes. If a model is experiencing drift, and subsequently underperforming, no amount of ethics or good intentions will point you to the exact issue causing it.
In order to trust your AI and ensure its performance is driving ethical value, you can easily implement these 4 quick wins and start practicing RAI:
Model Visibility
One place where all relevant stakeholders can see and understand the status of all production models.
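As a minimal sketch of what such a single status view could look like, the snippet below aggregates the health of every production model into one dictionary any stakeholder can read. The `ModelStatus` fields and the health rules (staleness window, drift threshold) are illustrative assumptions, not a specific vendor’s API.

```python
from dataclasses import dataclass

@dataclass
class ModelStatus:
    name: str
    owner: str
    last_prediction_age_hours: float
    drift_score: float

def health(m: ModelStatus) -> str:
    if m.last_prediction_age_hours > 24:
        return "STALE"      # model stopped receiving traffic
    if m.drift_score > 0.2:
        return "DRIFTING"   # inputs deviate from the training baseline
    return "HEALTHY"

def dashboard(models):
    """One place to check the status of all production models."""
    return {m.name: health(m) for m in models}
```

For example, `dashboard([ModelStatus("churn", "ds-team", 1.0, 0.05)])` yields `{"churn": "HEALTHY"}` — the point is simply that status is computed by one shared rule set, not judged ad hoc by each team.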
From Reactive to Proactive Response
AI’s black-box nature and numerous use cases make it impossible to define every unexpected behavior in advance. Monitoring and alerting will help you act quickly: respond now, remediate later.
1. Define your model’s expected behavior: input data distributions, data science and business KPIs, expected prediction distribution, and activity volume.
2. Set monitors and alerts for any deviations outside the defined standards.
3. Ensure your main communication channels are integrated with your alerting mechanism. This way all relevant stakeholders have easy access and can encourage healthy discussion on optimizing response.
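The steps above can be sketched in code. Below is one common way to monitor an input distribution: compute the Population Stability Index (PSI) between a training baseline and live traffic, and fire an alert past a threshold. The bucketing scheme and the 0.2 threshold are illustrative assumptions, not a prescribed standard.

```python
import math

def psi(expected, actual, buckets=10):
    """Population Stability Index between two numeric samples."""
    lo, hi = min(expected), max(expected)
    step = (hi - lo) / buckets or 1.0

    def histogram(values):
        counts = [0] * buckets
        for v in values:
            idx = min(int((v - lo) / step), buckets - 1)
            counts[max(idx, 0)] += 1
        total = len(values)
        # Small floor avoids log-of-zero for empty buckets.
        return [max(c / total, 1e-6) for c in counts]

    p, q = histogram(expected), histogram(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

def check_drift(baseline, live, psi_threshold=0.2):
    score = psi(baseline, live)
    if score > psi_threshold:
        # In production this would post to Slack, PagerDuty, etc.,
        # per step 3 above.
        print(f"ALERT: input drift detected (PSI={score:.3f})")
    return score
```

Identical distributions score near zero; a heavily shifted feed scores well above the threshold and triggers the alert hook, which is where the integration with your communication channels belongs.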
Incident-Response Workflow
By responding swiftly and efficiently you can significantly mitigate the impact of an ML event on your business and users.
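A predefined workflow means responders never improvise under pressure: each alert type already maps to an owner and a first action. The alert types, owners, and actions below are illustrative assumptions; the shape of the playbook is the point.

```python
# Each alert type maps to (owner, first action) -- the decision tree
# for alert response, written down before an incident happens.
PLAYBOOK = {
    "input_drift":  ("data-eng", "Validate upstream pipelines and schemas"),
    "output_drift": ("ds-team",  "Compare predictions against the baseline"),
    "kpi_breach":   ("ds-team",  "Escalate; consider rollback to prior model"),
    "missing_data": ("data-eng", "Check ingestion jobs and source availability"),
}

def respond(alert_type: str) -> str:
    owner, first_action = PLAYBOOK.get(
        alert_type, ("on-call", "Triage manually and update the playbook")
    )
    return f"Page {owner}: {first_action}"
```

Note the fallback: an unrecognized alert still gets an owner, and the playbook grows to cover it next time.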
Performance Review
If no one reviews model performance, how do we know it’s meeting its goals?
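A recurring review can be as simple as comparing each KPI against its stated goal and flagging the misses. The metric names and goal values here are illustrative; the habit of writing goals down and checking them on a schedule is what matters.

```python
def performance_review(metrics: dict, goals: dict) -> list:
    """Return (kpi, actual, goal) for every KPI falling short of its goal."""
    return [
        (kpi, actual, goals[kpi])
        for kpi, actual in metrics.items()
        if kpi in goals and actual < goals[kpi]
    ]
```

For example, reviewing `{"precision": 0.7, "recall": 0.9}` against goals `{"precision": 0.8, "recall": 0.85}` flags only precision, giving the review meeting a concrete agenda.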
There is a vast amount of private and public entities leveraging AI to make our world a better place. Whether it’s
The 4 Quick Wins were derived from Liran Hason’s