Too Long; Didn't Read
Sometimes in data science and machine learning we encounter imbalanced class problems, where one class has far more instances than another. One metric that helps with this problem is the Matthews correlation coefficient (MCC). The MCC takes values between -1 and 1, where a score of 1 indicates perfect agreement between predictions and labels. But how does the MCC compare against other popular metrics for imbalanced class problems, such as the F1-score? An argument is often made in favour of the MCC, but I've found that in practice both metrics give similar results.
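To make the comparison concrete, here is a minimal sketch that computes both metrics from a confusion matrix on a small, made-up imbalanced dataset (the labels and predictions are illustrative values, not from any real experiment). The formulas are the standard definitions: MCC = (TP·TN − FP·FN) / √((TP+FP)(TP+FN)(TN+FP)(TN+FN)) and F1 = 2TP / (2TP + FP + FN).

```python
from math import sqrt

def confusion(y_true, y_pred):
    """Count true positives, true negatives, false positives, false negatives."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def mcc(y_true, y_pred):
    """Matthews correlation coefficient; 0.0 when the denominator is zero."""
    tp, tn, fp, fn = confusion(y_true, y_pred)
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def f1(y_true, y_pred):
    """F1-score: harmonic mean of precision and recall."""
    tp, tn, fp, fn = confusion(y_true, y_pred)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# Imbalanced toy data: 8 negatives, 2 positives.
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 1, 1, 0]  # one false positive, one false negative

print(f"MCC: {mcc(y_true, y_pred):.3f}")  # → 0.375
print(f"F1:  {f1(y_true, y_pred):.3f}")   # → 0.500
```

In practice you would use `matthews_corrcoef` and `f1_score` from `sklearn.metrics`, which implement the same definitions; the pure-Python version above just makes the confusion-matrix arithmetic explicit.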