
Addressing Bias in AI Models Used for University Admissions Decisions

by Zac Amos, August 14th, 2024

Too Long; Didn't Read

AI is used in college admissions for application screening, predictive analytics, personalized communication, and bias mitigation. However, AI can be biased in these use cases. Schools should take proactive and preventative measures, such as curating historical data and improving AI literacy, to reduce issues with AI bias.

Poring over university admissions applications takes immense manual labor, and biased human decision-making regularly produces unflattering headlines. AI could solve both problems, but bias oversights in the technology are still plentiful enough to negatively impact a student's future.

How Is AI Used in College Admissions?

AI does more than scan college applications to see if they meet requirements, though this is its primary application.

Application Screening

Screening algorithms are customizable to a university's standards for flagging applications that meet its criteria. AI can parse standardized test scores, gauge the value of extracurricular commitments, and weigh them against coursework.
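To make that concrete, here is a minimal Python sketch of what a configurable screen might look like. The criteria, weights, normalization, and cutoff are all illustrative assumptions, not any university's actual rubric:

```python
# Minimal sketch of a configurable application screen.
# Criteria, weights, and the cutoff are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Application:
    gpa: float             # on a 4.0 scale
    sat: int               # composite score, 400-1600
    extracurriculars: int  # count of sustained commitments

def screen(app: Application, weights=None, cutoff=0.6) -> bool:
    """Return True if the application passes the initial screen."""
    weights = weights or {"gpa": 0.5, "sat": 0.3, "extracurriculars": 0.2}
    # Normalize each metric to a 0-1 range before weighting.
    normalized = {
        "gpa": app.gpa / 4.0,
        "sat": (app.sat - 400) / 1200,
        "extracurriculars": min(app.extracurriculars, 5) / 5,
    }
    score = sum(weights[k] * normalized[k] for k in weights)
    return score >= cutoff

print(screen(Application(gpa=3.8, sat=1350, extracurriculars=3)))  # True
```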

Predictive Analytics

Machine learning can compare previous applicants to incoming ones to estimate their chances of success. This is helpful from an administrative perspective because it gives boards insight into likely retention rates in the coming years. The predictions tend to become more accurate the longer a university uses AI and accumulates data.
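A minimal sketch of that idea, assuming scikit-learn and fully synthetic "historical" records, might look like this; the feature names and labels are invented for illustration:

```python
# Sketch of retention prediction from historical applicant records.
# The features and labels are synthetic; a real deployment would
# use the university's own records.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Historical applicants: [gpa, sat_percentile, extracurricular_count]
X_past = rng.uniform([2.0, 0.0, 0], [4.0, 1.0, 6], size=(500, 3))
# Synthetic "retained after year one" labels loosely tied to GPA.
y_past = (X_past[:, 0] + rng.normal(0, 0.5, 500) > 3.0).astype(int)

model = LogisticRegression().fit(X_past, y_past)

# Score an incoming applicant against the historical pattern.
incoming = np.array([[3.6, 0.85, 2]])
print(f"Estimated retention probability: {model.predict_proba(incoming)[0, 1]:.2f}")
```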

Personalized Communication

During and after the application process, competent chatbots could be invaluable resources for students. If they have questions about submitting forms or what a prompt means, a chatbot has the knowledge to give helpful responses.
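As a toy illustration only, a crude FAQ responder can match a question against stored answers by word overlap. Production admissions chatbots are typically built on large language models or dialogue platforms, and the questions and answers below are invented:

```python
# Toy sketch of an admissions FAQ responder using word overlap.
# The entries below are invented examples, not real school policy.
FAQ = {
    "when is the application deadline": "Applications close January 15.",
    "how do i submit my transcript": "Upload transcripts through the applicant portal.",
    "what does the essay prompt mean": "The prompt asks how a challenge shaped you.",
}

def answer(question: str) -> str:
    q_words = set(question.lower().split())
    # Pick the stored question sharing the most words with the query.
    best = max(FAQ, key=lambda k: len(q_words & set(k.split())))
    return FAQ[best]

print(answer("When is the deadline to apply?"))
```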

Bias Mitigation

AI algorithms are just as capable of countering bias as creating it. For years, human reviewers have looked the other way and accepted applications that did not meet a metric; an AI will not. Applying the same criteria to every applicant makes parts of the admissions process more objective by removing some of the openings for human bias to compromise it.

How Could AI Develop Bias?

Universities are responsible for collaborating with AI engineers and data scientists to filter out bias. This is an ongoing job, because the algorithms change continually as they learn from their own outputs.

Data Bias

These repeated learning cycles can cause data bias, one of the biggest oversights in admissions. Data bias occurs when the training dataset contains information that promotes harmful patterns, such as discrimination against certain demographics rooted in historical tendencies.
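A simple pre-training audit can surface this before the model ever sees the data. The sketch below, with invented column names and values, compares historical admission rates by group using pandas:

```python
# Sketch of a pre-training audit: compare historical acceptance rates
# across groups before the data trains a model. The column names and
# values are assumptions for illustration.
import pandas as pd

history = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "admitted": [1,   1,   0,   0,   0,   1,   0,   1],
})

rates = history.groupby("group")["admitted"].mean()
print(rates)
# A model trained on this data learns that group B was admitted far
# less often -- a historical tendency, not a measure of merit.
```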

Algorithmic Bias

Algorithmic bias is another red flag. This occurs when criteria carry different weights throughout the training process. Is the model starting to prioritize test scores over GPA, or is it skewed to prefer families with alumni? Experts must adjust how the AI views each metric to ensure consistency after repeated learning cycles.
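One way experts can catch such drift is to audit the model's learned weights after each cycle. This sketch trains a model on synthetic data, then prints a standardized coefficient per criterion; the feature names are assumptions for illustration:

```python
# Sketch of a weight audit after a learning cycle: inspect which
# criteria the model has come to prioritize. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))  # columns: gpa, test_score, legacy
y = (X[:, 1] * 2 + X[:, 2] + rng.normal(size=300) > 0).astype(int)

X_std = StandardScaler().fit_transform(X)
model = LogisticRegression().fit(X_std, y)

for name, coef in zip(["gpa", "test_score", "legacy"], model.coef_[0]):
    print(f"{name:>10}: {coef:+.2f}")
# If test_score or legacy dwarfs gpa, the weighting has drifted and
# needs rebalancing before the next cycle.
```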


The University of California made a landmark decision to remove standardized test scores from its admissions considerations because systemic racism and ableism in testing disadvantage students. Years of historical trends like this bode poorly for AI training: wherever the metric must still be weighed, the model inherits that history, and the depth of the bias will take substantial work to overcome.

Feedback Loops

This hints at how dangerous feedback loops can be. Depending on the learning environment AI overseers employ, the model may become more extreme in its outcomes. Positive reinforcement could favor students with specific backgrounds or demographics.
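A toy simulation shows how quickly a small tilt compounds when a model keeps retraining on the cohort it just admitted; every number below is an illustrative assumption:

```python
# Toy simulation of a feedback loop: each cycle the model retrains
# only on the cohort it just admitted, so a small initial tilt toward
# group A compounds. All numbers are illustrative assumptions.
admit_rate = {"A": 0.55, "B": 0.45}  # slight initial tilt

for cycle in range(1, 6):
    # Retraining on the admitted pool reinforces whichever group is
    # already overrepresented in it.
    admit_rate["A"] = min(admit_rate["A"] * 1.10, 1.0)
    admit_rate["B"] = max(admit_rate["B"] * 0.90, 0.0)
    print(f"cycle {cycle}: A={admit_rate['A']:.2f}  B={admit_rate['B']:.2f}")
# Without an outside correction, the gap widens every cycle.
```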


Researchers see this bias problem affecting universities, but graduates will find it extends into job applications as well. The crossover is immense: when researchers submitted fake but demographically distinct names to GPT-3.5 against a real job posting, the model ranked names culturally associated with Black Americans lower than names tied to white men and Asian women.

Emotional Gaps

An estimated 90% of students with a GPA above 4.0 and flawless SAT scores still fail to get into America's top 10 universities. Adding AI oversights could make matters worse or more inconsistent. AI has limited contextual understanding of an applicant's experiences and leans too heavily on quantitative interpretation. It may not know how to judge a compelling story in an application essay, or how to evaluate students who must apply through nontraditional routes.

What Can Admissions Officials Do?

Some universities have already run afoul of biased AI, and avoiding such fiascos in the future is crucial for maintaining their reputations and keeping applicants interested. What proactive and preventive measures can schools take to keep students safe and ensure they are judged accurately?


Many AI enthusiasts wonder whether curating historical data, even at the risk of skewing it or misrepresenting the past, would reduce modern bias. Diverse, representative training data paired with continuous monitoring is a more proactive way to design bias-free admissions AI. That includes input from inclusive, interdisciplinary populations and datasets that fill gaps across socioeconomic, ethnic, educational, and demographic experiences. Incorporating bias detection and mitigation tools complements these efforts; one widely used check is sketched below.
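A common bias-detection check is the "four-fifths rule" from US employment guidelines, which flags any group whose selection rate falls below 80% of the highest group's rate. A minimal sketch, with invented decision data:

```python
# Sketch of a "four-fifths rule" check: flag a model whose selection
# rate for any group falls below 80% of the highest group's rate.
# The decision data below is invented for illustration.
def selection_rates(decisions: dict[str, list[int]]) -> dict[str, float]:
    return {g: sum(d) / len(d) for g, d in decisions.items()}

def disparate_impact(decisions: dict[str, list[int]], threshold=0.8) -> bool:
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values()) < threshold

decisions = {
    "group_A": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% admitted
    "group_B": [1, 0, 0, 0, 1, 0, 0, 0],  # 25% admitted
}
print(disparate_impact(decisions))  # True: 0.25/0.75 < 0.8, flag for review
```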


AI professionals should help university leaders become literate in AI bias. They can offer workshops on spotting anomalies and feeding observations back for further training. This works best if the admissions AI has built-in transparency that forces it to explain how it derived its decisions. Those explanations serve as road maps for all AI users because they trace problematic, biased outputs back to their source; a simplified version of such a report is sketched below.
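As a simplified illustration of that kind of transparency, a linear screening model can break a single decision into per-criterion contributions. Real systems often rely on attribution tools such as SHAP or LIME; the weights and values here are assumptions:

```python
# Sketch of a transparency report for a linear screening model: break
# one decision into per-criterion contributions so admissions staff
# can see where the score came from. Weights and inputs are invented.
weights = {"gpa": 0.50, "test_score": 0.30, "extracurriculars": 0.20}
applicant = {"gpa": 0.95, "test_score": 0.60, "extracurriculars": 0.40}  # normalized

contributions = {k: weights[k] * applicant[k] for k in weights}
for criterion, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"{criterion:>16}: {value:.3f}")
print(f"{'total':>16}: {sum(contributions.values()):.3f}")
# A reviewer can trace an anomalous decision back to the criterion
# that drove it and feed that insight into retraining.
```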


Universities can also take a mixed approach to admissions, keeping humans in the loop to judge qualitative aspects. A recent study found an AI model was a poor predictor of educational outcomes because of its racial biases: it mispredicted the academic success of students from Asian backgrounds 73% of the time. AI can process numbers with minimal error, but humans may be better judges when reading personal statements or conducting interviews.
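One way to structure that mix is a routing step: the model handles the quantitative screen and defers anything qualitative or borderline to a human reader. The thresholds and field names in this sketch are illustrative assumptions:

```python
# Sketch of a mixed review pipeline: the model handles the quantitative
# screen, and anything qualitative or borderline goes to a human.
# Thresholds and field names are illustrative assumptions.
def route(application: dict) -> str:
    score = application["model_score"]  # from the quantitative screen
    if application.get("essay") or application.get("nontraditional"):
        return "human review"           # qualitative material
    if 0.4 <= score <= 0.6:
        return "human review"           # too close to call
    return "auto-accept" if score > 0.6 else "auto-decline"

print(route({"model_score": 0.82}))                 # auto-accept
print(route({"model_score": 0.55}))                 # human review
print(route({"model_score": 0.30, "essay": True}))  # human review
```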

Students in the Hands of AI

Removing AI bias from college admissions is a deceptively critical issue: poor data management could derail many students' bright futures. Data scientists and university leaders must prioritize it, because the next-generation workforce relies on the accuracy and integrity of these systems for a fair chance at an education. Neglect will cause problems across every industry that needs experienced professionals walking through the door.