Once confined to boardroom decision support, AI has become a full team player in the modern digital enterprise. And for good reason - according to a McKinsey survey, 66% of AI adopters report an increase in their revenues.
Beyond the balance sheet, AI can elevate the customer experience across a wide range of use cases.
However, if deployed irresponsibly, AI wonders can quickly turn into AI disasters - and history offers plenty of evidence. From gender bias in hiring algorithms and racial bias in facial recognition systems to loan-decisioning models that charge underprivileged communities higher interest rates, AI failures are commonplace - and, according to Harvard Business Review, they have cost businesses hundreds of millions of dollars.
Beyond mitigating AI's downside impact on a business, building fair and responsible AI algorithms is a socio-ethical imperative for any business that intends to stay in business.
Here is a look at how bias creeps into AI models, along with a framework for addressing it in your enterprise AI strategy.
Most enterprises have adopted a standard framework for building business-driven AI use cases. Within the larger AIOps cycle, however, bias can creep into a model at any of the five stages of AI model development. Here are those sources:
Bias within datasets: AI models often make use of datasets that carry the footprints of human bias within them - consider a dataset of loan decisions, hiring decisions, or a list of buyers from a shopping mall.
Such datasets can inject bias into a model if records are labeled erroneously, if certain communities are under-represented, or if the data sampling strategy was itself biased to begin with.
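As an illustration, the Python sketch below runs a first-pass audit for under-representation and skewed labels. The file name and the "gender"/"approved" columns are hypothetical stand-ins for your own data:

```python
import pandas as pd

# Hypothetical loan-decision dataset; the file name and the
# "gender" / "approved" columns are illustrative assumptions.
df = pd.read_csv("loan_decisions.csv")

# Representation check: how large is each group relative to the whole?
print(df["gender"].value_counts(normalize=True))

# Label-balance check: does the approval rate differ sharply by group?
approval_rates = df.groupby("gender")["approved"].mean()
overall_rate = df["approved"].mean()

# Flag groups whose approval rate deviates from the overall rate by
# more than an arbitrary, illustrative 10 percentage points.
flagged = approval_rates[(approval_rates - overall_rate).abs() > 0.10]
if not flagged.empty:
    print("Groups with skewed labels:\n", flagged)
```

A skew surfaced here is not proof of bias, but it tells you where to look before the data ever reaches a model.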
Bias in model development: AI models are built through extensive processes that include feature selection, model tuning, and dataset splitting. Excluding certain features or conducting train-test splits without adequate oversight can inject bias into the model. For instance, training a model only on data from the first quarter of the year, for a variable that follows an annual trend, can produce a biased model.
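One way to guard against this seasonal pitfall, sketched below with scikit-learn, is to stratify the split on a seasonal marker so every part of the annual cycle appears in both the training and test sets. The DataFrame df and its "quarter" column are assumptions for illustration:

```python
from sklearn.model_selection import train_test_split

# `df` is assumed to hold a full year of records, with a "quarter"
# column marking the season each record belongs to.
train_df, test_df = train_test_split(
    df,
    test_size=0.2,
    stratify=df["quarter"],  # keep every season represented in both splits
    random_state=42,
)
```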
Injecting bias during validation: Some algorithms are prone to overfitting, while others tend to underfit. Understanding what the model's real-world application demands helps data scientists identify the level of complexity that minimizes total error.
Moreover, leakage between training and test data is another source of bias during the model testing and validation phase.
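A minimal sketch of both safeguards, assuming scikit-learn and pre-existing X_train/y_train arrays: wrapping preprocessing in a Pipeline keeps held-out statistics out of training (preventing leakage), while cross-validation scores indicate whether a given complexity setting over- or underfits:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# The Pipeline fits the scaler on each training fold only, so
# statistics from held-out data never leak into preprocessing.
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(C=1.0)),  # C controls model complexity
])

# Cross-validation estimates generalization error at this complexity;
# sweeping C and comparing mean scores helps locate the level that
# minimizes total error. X_train and y_train are assumed to exist.
scores = cross_val_score(model, X_train, y_train, cv=5)
print(scores.mean(), scores.std())
```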
Acquiring interaction bias: NLP-powered bots that generate engagement and interaction within communities can pick up biases from the humans they interact with. These algorithms can amplify the racial and gender biases present in the human voices on the platform.
Bias in purpose: Some models are biased in the very purpose they are built for. Consider Google News, which surfaces stories similar to the ones a user searches for. Such programs can trap the user in an information bubble shaped by the intent and parameters of their own search query.
Recognizing the sources of bias is the first step toward building fair, bias-free AI models. From there, enterprises should consider the following framework for mitigating bias in their AI strategy.
Today, technical communities are making considerable efforts to mitigate bias in AI models, but three key levers deserve particular attention:

1. Responsible algorithm development
2. Bias-free data curation
3. Alignment with CSR teams
Let's look at each of these in detail; consider checking out the hyperlinked resources to get started.
The algorithm development process is prone to absorbing biased influences at multiple points. Here is a checklist for promoting enterprise-wide responsible algorithm development:
Lastly, promote explainability and run black-box audits where design decisions and data models have been internalized into the algorithm. These steps bring risk zones into view early and help mitigate bias-associated risks during algorithm development.
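One widely used black-box auditing technique is permutation importance, which needs no access to the model's internals. The sketch below assumes a fitted_model, test data, and feature_names taken from your own workflow:

```python
from sklearn.inspection import permutation_importance

# Black-box audit: shuffle each feature and measure how much the
# model's score drops. A proxy for a protected attribute ranking
# highly here is a red flag worth investigating.
result = permutation_importance(
    fitted_model, X_test, y_test, n_repeats=10, random_state=0
)
for name, importance in sorted(
    zip(feature_names, result.importances_mean), key=lambda t: -t[1]
):
    print(f"{name}: {importance:.4f}")
```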
Data is the most significant source of bias in AI model building. Therefore, focus on data collection and modeling processes to eliminate bias, mitigate risk, and maximize the business value of an AI model from the get-go. Consider the following steps for curating bias-free datasets across the enterprise:
Lastly, internalize principles and processes for the responsible use of data within the organization's data strategy. This can be achieved by enforcing technological guardrails that users cannot override and by sourcing data from a variety of vetted providers.
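As one example of such a guardrail, the sketch below checks the "four-fifths rule" for disparate impact; the group and outcome columns reuse the hypothetical names from earlier and are illustrative only:

```python
import pandas as pd

def disparate_impact_ok(df: pd.DataFrame, group_col: str,
                        outcome_col: str, threshold: float = 0.8):
    """Four-fifths rule: every group's favorable-outcome rate should
    be at least 80% of the most-favored group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return (rates.min() / rates.max()) >= threshold, rates

# Hypothetical usage; a failing dataset could be held back for review.
ok, rates = disparate_impact_ok(df, "gender", "approved")
print("Passes four-fifths rule:", ok)
print(rates)
```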
CSR teams often help an organization move the needle on equity across the larger fabric of society, and they can play a key role in advancing its responsible AI roadmap. Here is how to achieve this alignment:
Our understanding of bias and fairness is incomplete and, therefore, still evolving. Before embarking on the journey toward fair AI development, enterprises must acknowledge that their conceptions of fairness (and, consequently, of bias) will be incomplete too.
But that is not a reason to sideline the question altogether.
AI holds abundant promise for businesses - but only if the technology is used in a responsible, socially and ethically conscious manner. By following the framework outlined above and leveraging responsible AI tools from hyperscale technology providers, enterprises will be well positioned to achieve outcomes from their AI strategy that are both commercially and ethically sound.