How to Embed AI Ethics into your Work Culture

by Modzy, June 21st, 2021
Too Long; Didn't Read

As AI proliferates, it’s time to figure out how to operationalize AI ethics: turn ideals into actions, and prime your culture and development process for success. Accenture announced an AI governance guidebook with recommendations on how to embed governance throughout development teams, including the selection of “fire wardens” with the responsibility to escalate issues when they arise. In many cases, the “black box” excuse for how AI reaches decisions or recommendations no longer cuts it.

AI ethics has long been a hot-button issue. For some, it’s reduced to a debate about whether AI should be making decisions traditionally reserved for humans. There are misconceptions that AI systems will quickly evolve into superhuman intelligence, or that they’ll be a silver bullet to solve problems for which we as a society don’t yet have answers.

For others, the challenge is figuring out how to operationalize ethics beyond simply articulating a set of principles. Too often, the conversation about ethics only starts after negative AI incidents are exposed.

As a community, we need to do better. As AI proliferates, it’s time to figure out how to operationalize AI ethics. It’s time to turn ideals into actions, and prime your culture and development process for success. Too much is at stake for anything less.

As other countries and big tech companies rush to win the AI race, there are global conversations underway to advocate for AI solutions that safeguard human rights, minimize the impact of unintended bias, and advance the social good. All recognize (though not all will heed) the vital need to establish clear standards for trustworthy AI, and uphold them.

Embedding AI Ethics into your Culture

The first step is to define your ethical principles. Numerous policy and research organizations have already done the hard work of analyzing and documenting the different standards and ethics documents around the world, which can serve as a good starting point. [i] Once you’ve established your AI ethics principles, what’s next?

It’s time to align your culture and institutional processes with your ethical principles. Operationalizing ethical AI requires a shift at all levels within an organization to embed and execute against a fundamentally new way of thinking. When your ethical principles are championed by leadership, embodied throughout staff, and in lockstep with your overarching business goals, it’s much easier for your teams to follow suit and make sure the technology measures up.

Leaders should take the conversation forward. Have hard discussions about what applications of your technology meet your principles, and what violates them. These types of difficult conversations ensure everyone is on the same page to take a stance against violations of the principles. This is your chance to create channels to escalate risks and build paths for recourse when mishaps do happen. Most importantly, it’s about incentivizing the people on the ground developing the AI to realize the value of this way of building AI, and empowering them to speak up when something isn’t working. When leaders take responsibility for how the ethics and AI message is spread and embraced throughout a culture, they help ensure the systems developed are accountable.

Overseeing the build of AI systems to align with your ethical principles is no small feat. Beyond open discourse among stakeholders, organizations need to institute training programs. Managers and developers must know how to identify and flag risks in systems so that they can be appropriately documented and monitored.

Even the most careful organization will have instances where an AI-enabled system goes awry or doesn’t perform as expected. To address this challenge, last year Accenture announced an AI governance guidebook with recommendations on how to embed governance throughout development teams, including the selection of “fire wardens” with the responsibility to escalate issues when they arise.[ii]

By creating mechanisms to document and share lessons learned internally about how and why these incidents happen, you will spur the evolution of an ethics-driven culture and make these mishaps fewer and farther between.

Data scientists training models and machine learning engineers building AI-powered applications shoulder the burden of putting these principles into practice. So how do you help them? Build in regular checkpoints and processes for adhering to ethical principles throughout the development process. If the first time your data scientists and developers are thinking about whether a system is transparent or accountable is after it’s deployed, it’s too late.  Work in tandem to establish checkpoints throughout model design and application development.
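To make such a checkpoint concrete, here is a minimal sketch of a hypothetical pre-deployment “ethics gate” that refuses to promote a model unless the required review artifacts exist. The artifact names and checks are illustrative assumptions, not a prescribed standard.

```python
# Hypothetical "ethics gate": block model promotion unless required review
# artifacts are present. Artifact names below are illustrative assumptions.
from pathlib import Path
import sys

REQUIRED_ARTIFACTS = {
    "model_card.md": "documents intended use and known limitations",
    "bias_evaluation.json": "records performance across relevant subgroups",
    "ethics_review_signoff.txt": "names the reviewer who approved deployment",
}

def ethics_gate(model_dir: str) -> bool:
    """Return True only if every required ethics artifact is present."""
    missing = [name for name in REQUIRED_ARTIFACTS
               if not (Path(model_dir) / name).exists()]
    for name in missing:
        print(f"BLOCKED: missing {name} ({REQUIRED_ARTIFACTS[name]})")
    return not missing

if __name__ == "__main__":
    model_dir = sys.argv[1] if len(sys.argv) > 1 else "."
    sys.exit(0 if ethics_gate(model_dir) else 1)
```

Wired into a CI pipeline, a gate like this turns “did anyone review this?” from a memory exercise into a check that fails loudly before deployment.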

By looking at discrete opportunities to both educate stakeholders and build in frequent checkpoints throughout development, you’ll create a culture and mechanisms to embed your ethical principles and meet the standards you’ve established.

AI System Design

Much of the work to operationalize AI ethics should occur during AI model design and system development. This requires a certain level of effort outside the normal wheelhouse of a data scientist or developer—that’s why it is so important to incentivize and empower these teams through changes to your organization’s culture. There are also steps your teams can take throughout development to ensure your systems are transparent, accountable, governable, and robust.

For many organizations, ethics principles point to the ultimate goal of building trustworthy AI, which requires some degree of transparency into how AI reaches decisions or recommendations. In many cases today, the “black box” excuse no longer cuts it.

Stakeholders need to be able to check how or why a model reached a decision. Explainability is one tactic to get at the issue of transparency. You can “explain” how a model reaches decisions or translate the process that AI uses to transform data into insight.
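As a minimal sketch of one common explainability tactic, permutation importance measures how much a trained model’s accuracy drops when each input feature is shuffled; features whose removal hurts most are the ones driving decisions. The model, data, and scoring below are placeholders, not a specific product’s API.

```python
# Permutation importance: shuffle one feature at a time and measure the
# accuracy drop. Works with any classifier exposing a predict() method.
import numpy as np

def permutation_importance(model, X, y, n_repeats=5, seed=0):
    rng = np.random.default_rng(seed)
    baseline = (model.predict(X) == y).mean()   # accuracy on unmodified data
    scores = {}
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])           # destroy the information in feature j
            drops.append(baseline - (model.predict(X_perm) == y).mean())
        scores[j] = float(np.mean(drops))       # larger drop => feature mattered more
    return scores
```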

Some naysayers immediately bristle at the mention of this approach because it’s not a panacea: not all AI lends itself to explainability, nor should every system be required to be explainable. There are tradeoffs to be made between explainability and system accuracy.

DARPA, the Department of Defense’s research agency that brought us the internet, has maintained an active research program focused on explainable AI for the past few years. Part of building transparency into AI is consciously making a choice about what level of explainability is acceptable for a specific application, and documenting and owning that choice.

As mentioned, a key step is developing checklists that help data scientists and developers think through “ethics” questions in the course of their development activities. While many organizations already have processes and checklists in place for their normal software development activities, AI model design and training, and the evaluation of models’ adherence to ethical principles, remain more ad hoc.

Research published earlier this year by Microsoft’s Aether Working Group on Bias and Fairness showed that co-developing checklists with data scientists and machine learning engineers to evaluate an AI model or AI-enabled system’s adherence to ethical principles is effective.

Something as simple and clear as the checklist provided in their research paper can help formalize and institutionalize the operationalization of ethics principles.[iii]
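As a rough illustration (not the Aether checklist itself), a checklist can be captured as structured data so checkpoint reviews are recorded rather than left to memory. The stages and questions below are assumptions made for the sketch.

```python
# Sketch: an ethics checklist as structured data, queryable by development stage.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChecklistItem:
    stage: str            # e.g. "design", "training", "deployment"
    question: str
    completed: bool = False
    notes: str = ""

@dataclass
class EthicsChecklist:
    items: List[ChecklistItem] = field(default_factory=list)

    def outstanding(self, stage: str) -> List[ChecklistItem]:
        """Items for a given stage that still need review before moving on."""
        return [i for i in self.items if i.stage == stage and not i.completed]

checklist = EthicsChecklist(items=[
    ChecklistItem("design", "Have intended and out-of-scope uses been documented?"),
    ChecklistItem("training", "Has performance been evaluated across relevant subgroups?"),
    ChecklistItem("deployment", "Is there a channel to escalate and remediate reported harms?"),
])
print(checklist.outstanding("training"))
```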

If you build transparency into models from the onset and establish formal checkpoints, you’ll be better able to evaluate ethics concerns throughout the development process. Once a system is deployed, auditability is crucial to operationalizing ethics.

At the Fairness, Accountability and Transparency conference this year, researchers from Google and the Partnership on AI published a paper detailing a framework for auditing AI systems prior to deployment.[iv] As organizations deploy AI at an enterprise scale, the potential for risk grows—you need to be able to look under the hood and identify who or what caused an incident. Auditable AI during the design process can complement explainable AI, or stand in for it when explainability isn’t possible.

It’s up to you to determine how you can achieve this with your software development teams—through building an audit function that logs activity assigned to individual users’ API keys, audit teaming exercises, third-party audits, or bug bounties.[v] There are many options to leverage to proactively mitigate potential ethics risks down the line.
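For instance, the first of those options might look like the sketch below: an audit log that attributes every inference request to an API key so incidents can be traced after the fact. The JSON-lines storage format, hashed keys, and field names are assumptions for illustration, not a specific platform’s logging scheme.

```python
# Sketch: append-only audit trail attributing each inference to an API key.
import json, hashlib, time

def audit_record(api_key: str, model_id: str, input_summary: str, output_summary: str) -> dict:
    return {
        "timestamp": time.time(),
        "api_key_hash": hashlib.sha256(api_key.encode()).hexdigest(),  # never store raw keys
        "model_id": model_id,
        "input_summary": input_summary,
        "output_summary": output_summary,
    }

def log_inference(path: str, record: dict) -> None:
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")   # one JSON object per line

# Usage: wrap each model call so the audit trail is written alongside the prediction.
log_inference("audit.log", audit_record("user-key-123", "credit-risk-v2",
                                         "loan application features", "score=0.72"))
```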

Operationalizing AI ethics must be more than a pipe dream, more than a conversation. It’s time for organizations developing AI to ask, and answer, essential questions about ethical principles and processes. When you institute changes within your culture, you’re enabling the shift to an ethics-driven approach.

For the ultimate payoff, make the investment now in reframing the AI system design process to incentivize data scientists and developers to build systems that live up to your ethical principles. It’s time to move beyond ideals to action.