This article is co-authored by Alex Stern & Eugene Sidorin.
In the last couple of years, concerns around biases introduced by AI systems have grown significantly. In most cases, the issue stems from the fact that a large number of real-world applications rely on labeled data for training, which by definition means that such models are only as good as the data they are initially trained on.
Therefore, if the underlying dataset has flaws (such as focusing on one characteristic at the expense of others), chances are the neural network would pick up those biases and further amplify them.
In some cases, the implications can be profound. For instance, a study by Joy Buolamwini, a researcher at the M.I.T. Media Lab, demonstrated that leading face recognition systems from Microsoft, IBM and Megvii misclassified the gender of white males only about 1% of the time, but made mistakes on up to 35% of darker-skinned females.
The models used to power facial recognition systems were trained on a biased dataset that contained a larger proportion of photos of white males, and thus got progressively better at correctly recognizing their gender. Considering that face recognition tech is now increasingly used by law enforcement, and that African Americans are the most likely to be singled out because they are disproportionately represented in mug-shot databases, such discrepancies in performance could have a very significant negative impact.
AI systems’ performance is often measured against human-level performance: the reason is that for many tasks, humans perform at a level not very far from the lowest possible error rate (known as the Bayes error rate).
For instance, humans are generally quite good at face recognition, and while deep learning neural nets have largely caught up with humans in overall recognition quality over the last few years, humans remain much less affected by issues related to biased training datasets. In other words, unlike neural nets, we are capable of accurately recognizing a wide variety of faces, whether or not we’ve previously been exposed to a significant number of people who share a specific set of characteristics.
Another issue is related to the pervasiveness of AI systems: no totalitarian government has previously been capable of spying on its citizens 24/7, but now, with the proliferation of face recognition and similar technologies, this is rapidly changing.
If such technology is prone to bias, it can form the basis for systemic discrimination against minorities, which would then be extremely hard to root out.
Issues such as these are a cause for concern, of course, and need to be duly addressed. However, one question to ask is whether there are certain situations where AI can actually help remove existing biases, rather than risk introducing new ones.
This is particularly relevant where two conditions are met: first, human judgment is already prone to mistakes, and second, the use of AI doesn’t unlock new scenarios that were previously impossible (and thus might create additional issues), but rather helps improve existing ones. One such scenario can be found in education, or more specifically, in college admissions.
Today, many universities rely on a holistic approach to admissions, which means they can take a wide variety of information about the applicant into account to make their decisions.
The idea is that with holistic admissions, schools get to assess the whole person, instead of focusing on a few select pieces of information, such as test scores or high school GPA, that don’t necessarily provide a full picture of the applicant.
While it’s hard to deny that a few data points, such as test scores, are unlikely to provide deep enough insight into a person’s abilities, a holistic approach, if handled carelessly, could create a whole new variety of issues, as it risks introducing the biases of the college admissions officer into the decision-making process, much like AI does in the case of face recognition (the difference here being that the biases are introduced by humans, rather than automated systems).
The more information about applicants is taken into account during the admissions review, the higher the chances of introducing biases, some of them subtle and minor, others quite obvious and substantial. That’s not to blame college admissions officers - rather, this is simply an inevitable result of applying subjective human judgment to complex, multi-dimensional data sets, especially with a limited amount of time to spend on each case.
One example of such a bias was asserted in the recent admissions lawsuit against Harvard, where a group representing Asian-American students claimed that “Harvard consistently rated Asian-American applicants lower than others on traits like ‘positive personality,’ likability, courage, kindness and being widely respected.” It appears more than likely that this isn’t an isolated case of bias in the admissions process, but rather just the most visible one.
Even assuming one agrees that the college admissions process can be biased, what can AI possibly do to help deal with those issues? As it turns out, quite a lot.
Typically, in order for systemic bias to exist, two criteria must be met. First, there has to be a pattern of one group of people being treated substantially differently from all the others. Second, the data demonstrating the existence of such bias must be difficult to collect and analyze - otherwise, it’d be easy to catch and call out bias when it first emerges.
This is one area where AI could prove extremely useful, as it could provide colleges (as well as, say, regulators) with powerful tooling to identify and investigate instances of potential abuse, making it possible to analyze incoming data and monitor statistics for all decision-makers involved in the admissions process (school officials, alumni interviewers, etc.) to uncover any worrisome trends.
If, for instance, a group of students of a particular ethnic origin (such as Asian-American students in Harvard’s case) is indeed receiving substantially lower scores from admissions officers on a particular set of metrics compared to members of all other groups, a decent AI-enabled pattern recognition system would be able to pick that up quite easily and raise an alarm.
Human involvement would still be needed, of course, to formulate the initial hypothesis to test (in this case, that bias must be involved, as on average various racial groups shouldn’t show such discrepancies in scores), and then to look for further proof if needed, e.g. by analyzing the scores given to applicants by other parties involved in the admissions process, such as alumni interviewers.
Still, if AI was employed to monitor the incoming data streams for admissions, issues such as this could be brought to university officials’ and regulators’ attention much earlier and would also be much more difficult to ignore.
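To make this concrete, here is a minimal sketch of what such a monitoring check might look like. It is only an illustration: the column names (`ethnic_group`, `personality_score`) and the thresholds are hypothetical, and a real system would require far more careful statistical and legal treatment than a simple significance test.

```python
import pandas as pd
from scipy import stats

def flag_score_gaps(df, group_col, score_col, alpha=0.01, min_gap=0.3):
    """Flag groups whose mean score differs significantly from all other applicants.

    Assumes one row per applicant, with a categorical group column (e.g. a
    self-reported ethnicity field) and a numeric score column (e.g. the
    'personal rating' assigned by an admissions officer).
    """
    flags = []
    for group, subset in df.groupby(group_col):
        rest = df[df[group_col] != group]
        # Welch's t-test: does this group's mean score differ from everyone else's?
        t_stat, p_value = stats.ttest_ind(subset[score_col], rest[score_col],
                                          equal_var=False)
        gap = subset[score_col].mean() - rest[score_col].mean()
        if p_value < alpha and abs(gap) >= min_gap:
            flags.append({"group": group, "mean_gap": round(gap, 2),
                          "p_value": p_value, "n": len(subset)})
    return pd.DataFrame(flags)

# Hypothetical usage: applications.csv holds one row per applicant.
# applications = pd.read_csv("applications.csv")
# print(flag_score_gaps(applications, "ethnic_group", "personality_score"))
```

A flagged group is not proof of bias by itself, but it gives university officials and regulators a concrete, data-backed starting point for the kind of human follow-up described above.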
While admissions officers already look at a significant amount of information about each applicant, chances are this information can still be enriched, either by making use of additional data sources or by extracting additional insights from the existing data.
One area where AI might help extract additional insights is tying together the recruiting data of graduating students with their profiles from the admissions stage.
This might arguably be less relevant (and possibly more controversial) for colleges, but for business or law schools, connecting those disparate sets of data could provide huge value: after all, many top grad schools pride themselves on their recruiting stats, and try to optimize their processes, even if only implicitly at the moment, to accept students who stand a good chance of securing the best jobs upon graduation.
But what if, instead of trying to manually decipher the complex relationship between applicants’ profiles and their ability to recruit successfully two or three years down the line, schools could plug in all the data they have for previous cohorts of students, from the moment they apply to school till the moment they leave (and possibly for years afterwards too, as many schools keep good track of their former students’ careers), and leverage AI to help find useful patterns that could serve as predictors of a student’s chances of success, both in school and afterward?
This data could then provide useful insights for admissions officers evaluating future generations of applicants.
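As a rough illustration of the idea, a school might start with something as simple as the sketch below, which joins hypothetical admissions and recruiting tables and fits a basic logistic regression to see which application-stage features correlate with recruiting outcomes. The file names, fields, and outcome definition are all invented for the example; a real deployment would need careful feature selection, validation, and fairness auditing.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical tables, one row per student in each.
admissions = pd.read_csv("admissions_profiles.csv")   # student_id, gmat, gpa, essay_score, years_experience
recruiting = pd.read_csv("recruiting_outcomes.csv")   # student_id, placed_within_3_months (0/1)

data = admissions.merge(recruiting, on="student_id")
features = ["gmat", "gpa", "essay_score", "years_experience"]

X_train, X_test, y_train, y_test = train_test_split(
    data[features], data["placed_within_3_months"], test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Which application-stage features correlate with recruiting success?
print(dict(zip(features, model.coef_[0])))
print("Held-out AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

Even a simple model like this surfaces patterns as explicit, inspectable numbers rather than as impressions in an admissions officer’s head, which is precisely what makes them useful as additional data points.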
Now, one could, of course, argue that, even for professional schools, recruitment stats shouldn’t drive the decision-making process for admitting or rejecting applicants.
This is a valid point, but there are two crucial considerations to take into account here. First, many schools are already doing exactly that, only subjectively and often without enough data to rely upon (or, rather, the data is there, but crunching it to produce useful insights is hard). Second, the insights that AI can produce aren’t really prescriptive by nature — rather, AI can provide additional data points to take into consideration, but it is still up to schools to decide whether to act upon them in any way.
Therefore, using AI to conduct analysis such as this, at worst, wouldn’t lead to additional biases (again, because all decisions are still handled by humans), and at best, might help schools to improve their processes by a wide margin and also uncover some of the biases that might be inherent to existing processes they employ.
Finally, by introducing AI into their processes, schools can find ways to automate some of the more mundane tasks that currently have to be handled by admissions officers, freeing them instead to focus on evaluating the parts of applications that truly require their attention and expertise, which could further improve the outcomes for all parties involved.
When it comes to AI, the advancements that we’ve witnessed over the last few years are often stunning. In many cases, however, those have also brought to our attention new questions and dilemmas that now need to be carefully handled.
As we’ve now seen, when implemented carelessly, AI can cause substantial issues, introduce new biases, and even endanger the rights and freedoms we might now take for granted. That being said, AI can also help us as a society to substantially improve our existing processes, and fight our existing inefficiencies and biases, with college admissions being just one example of an area where it stands to make a tremendous impact.
Today, the best universities often pride themselves on relying on thorough and time-tested approaches, especially when it comes to such a crucial process as admissions, and in many ways, they are absolutely correct.
That being said, however, it’s rare to find any effort that can’t be further improved, and that is especially true for such a complex — and, let’s face it — often subjective process as college admissions. As such, it’s one area where AI could provide immediate and significant value, creating exciting opportunities that both the schools and the companies building and implementing AI solutions should most definitely take time to explore.