The confusion matrix is one of the core foundations of evaluating AI model performance, and accuracy is the simplest metric built on top of it. Today we’ll break down what these terms mean and how they are calculated.
Why do we even need metrics for AI models? Most often, they are used to compare models with each other, keeping that evaluation separate from business metrics. If you look only at business outcomes (like customer NPS or revenue), you might completely misinterpret what actually caused the change.
For example, you release a new version of your model, and it performs better (its model metrics improved), but at the same time the economy crashes and people stop buying your product (your revenue drops). If you didn’t measure model metrics separately, you could easily assume that the new version harmed your business — even though the real reason was an external factor. This is a simple example, but it clearly shows why model metrics and business metrics must be considered independently.
Before we continue, it’s important to understand that model metrics differ depending on the type of task:
- Classification — when you predict which category an observation belongs to. For example, you see an image and must decide what’s on it. The answer could be one of several classes: a dog, a cat, or a mouse. A special case of classification is binary classification — when the answer is only 0 or 1. For instance: “Is this a cat or not a cat?”
- Regression — when you predict a numerical value based on past data. For example, yesterday Bitcoin cost $32,000, and you forecast it to be $34,533 tomorrow. In other words, you are predicting a number.
Since these tasks are different, the metrics used to evaluate them are also different. In this post, we’ll focus specifically on classification.
Confusion Matrix
First, let’s look at the table below. It’s called the confusion matrix. Imagine our model predicts whether someone will buy an elephant. Then we actually try to sell elephants to people — and in reality, some do buy, and some don’t.
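| | Actually bought | Did not buy |
| --- | --- | --- |
| Predicted: will buy | True Positive (TP) | False Positive (FP) |
| Predicted: will not buy | False Negative (FN) | True Negative (TN) |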
So, the results of such an evaluation can be divided into four groups:
- The model predicted that a person would buy the elephant — and they actually bought it → True Positive (TP)
- The model predicted that a person would not buy the elephant, but they ended up buying it anyway → False Negative (FN)
- The model predicted that a person would buy the elephant, but when offered, they did not → False Positive (FP)
- The model predicted that a person would not buy the elephant — and indeed, they didn’t → True Negative (TN)
This is the foundation for many other metrics.
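To make this concrete, here’s a minimal sketch in plain Python, using made-up predictions for the elephant example, that counts the four cells:

```python
# Hypothetical data: 1 = "will buy an elephant", 0 = "won't buy"
y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # what actually happened
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]  # what the model predicted

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # predicted buy, did buy
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # predicted no buy, but bought
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # predicted buy, but didn't
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # predicted no buy, didn't buy

print(tp, fn, fp, tn)  # 3 1 1 3
```

In practice you’d typically get the same counts from a library call such as sklearn.metrics.confusion_matrix(y_true, y_pred), which for 0/1 labels returns a 2×2 array with TN and FP in the first row and FN and TP in the second.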
Accuracy
Now let’s look at the simplest and most basic performance metric — the one clients usually mention when they don’t really understand machine learning. This metric is called accuracy.
Looking at the confusion matrix above, accuracy is calculated as:
Accuracy = (TP + TN) / (TP + TN + FP + FN)
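The same formula as a tiny Python helper (a sketch; the counts plugged in are the hypothetical ones from the elephant example above):

```python
def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    # Correct predictions (both classes) divided by all predictions
    return (tp + tn) / (tp + tn + fp + fn)

print(accuracy(tp=3, tn=3, fp=1, fn=1))  # 0.75 for the toy elephant data above
```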
Accuracy is rarely sufficient on its own, because it can give a misleading impression of model quality when the dataset is imbalanced.
For example, imagine we have:
- 100 images of cats
- 10 images of dogs
Let’s simplify: cats → 0, dogs → 1 (so this is binary classification). Clearly, cats appear ten times more often — meaning the dataset is not balanced.
Suppose our model classified:
- 90 cats correctly (as cats) → TN = 90
- 10 cats incorrectly (as dogs) → FP = 10
- 5 dogs correctly (as dogs) → TP = 5
- 5 dogs incorrectly (as cats) → FN = 5
Plugging into the formula:
Accuracy = (5 + 90) / (5 + 90 + 10 + 5)
Accuracy = 95 / 110 ≈ 86.4%
Seems like a solid result! 86% of the predictions are correct!
But notice something important: if we had simply predicted “every image is a cat”, our accuracy would be about 91% (100 out of 110) — without having any model at all.
So, even though our model seems to achieve a decent accuracy (~86%), it is actually doing worse than the trivial “always predict cat” baseline.
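To see both numbers side by side, here is a short sketch that reproduces the calculation, using the counts from the example above (cats as class 0, dogs as class 1):

```python
# Counts from the cats/dogs example: cats = 0 (negative), dogs = 1 (positive)
tp, tn, fp, fn = 5, 90, 10, 5

model_accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"Model accuracy: {model_accuracy:.1%}")            # ~86.4%

# Trivial baseline: always predict "cat" -> all 100 cats right, all 10 dogs wrong
baseline_accuracy = 100 / 110
print(f"'Always cat' accuracy: {baseline_accuracy:.1%}")  # ~90.9%
```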
Conclusion
In the next article, I’ll go deeper into the more practical metrics: Precision, Recall, F-score, ROC-AUC. After that, we’ll cover regression metrics such as MSE, RMSE, MAE, R², MAPE, SMAPE.
Follow me — check my profile for links!
