How To Measure the Accuracy Of A Predictive Model Or Algorithm Part 1.

Written by SeattleDataGuy | Published 2018/05/28
Tech Story Tags: machine-learning | data-science | analytics | data | programming


When developing predictive models and algorithms, whether linear regression or ARIMA models, it is important to quantify how well the model fits future observations. One of the simplest ways of measuring how correct a model is uses the error between the predicted value and the actual value. From there, several methodologies take this difference and extract further meaning from it. Quantifying the accuracy of an algorithm is an important step in justifying its use in production.

We will be using the accuracy() function from R's forecast package as our basis. Its output, shown below, has several abbreviations that might not seem so friendly; we will be going through some of them in this post. In addition, you can watch us explain the same errors in video format in RStudio!
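As a minimal sketch of what calling it looks like (the model and dataset here, auto.arima() on the built-in AirPassengers series, are purely illustrative choices):

# Sketch: the forecast package provides accuracy(); auto.arima() and the
# built-in AirPassengers dataset are used purely for illustration.
library(forecast)

fit <- auto.arima(AirPassengers)
accuracy(fit)
# Returns a row of error measures, including RMSE, MAE, MAPE and MASE.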

Mean Absolute Error (MAE)

The mean absolute error is one of the simpler errors to understand. It takes the absolute difference between the actual and forecasted values and finds the average. Taking the absolute value is important because it prevents errors with opposite signs from cancelling each other out. For instance, if you were to take the average of 1 and -1, you would get an average value of 0 because the 1 and -1 would essentially cancel each other out.

To avoid this, we use the absolute value. We wanted to demo how to find the MAE both mathematically and using SQL. The SQL expression below computes the same value as the MAE, and we feel it simplifies all the complex symbols you would otherwise see in the formal mathematical notation.

Avg(Abs(Actual - Forecast))
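Here is the same calculation as a minimal sketch in R; the actual and forecast vectors are made-up example values:

# Hypothetical actual and forecast values, purely for illustration.
actual   <- c(100, 110, 120, 130)
forecast <- c( 98, 112, 117, 135)

# MAE: average of the absolute differences, mirroring
# Avg(Abs(Actual - Forecast)) in SQL.
mae <- mean(abs(actual - forecast))
mae  # 3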

Root Mean Squared Error (RMSE)

The root mean squared error seems somewhat similar to the MAE. They both take the difference between the actual and the forecast. However, the RMSE also then squares the difference, finds the average of all the squares and then finds the square root. Now it might seem like the action of squaring and then taking the square root may cancel each other out. This isn’t the case. The RMSE essentially punishes larger errors. Another way to phrase that is that it puts a heavier weight on larger errors.

For example, let's compare the two tables below. If you notice, the MAE and RMSE are nearly identical for both table 1 and table 2. However, the gap between the two values grows as the errors get larger, even when the increase in error is only 1, as denoted in the first row. If the error were 5, 6, or another larger number, the difference between the RMSE and MAE would grow even larger. This is because the error is squared, so the penalty grows quadratically rather than linearly. Thus, an error increase of 1 has a greater effect at each step (e.g., going from 3 to 4, then from 4 to 5). This is why the RMSE essentially punishes larger errors.
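To make that concrete, here is a small R sketch (with made-up error values) comparing two sets of errors that have the same MAE but different RMSEs:

# Two hypothetical sets of errors with the same total absolute error.
errors_even   <- c(2, 2, 2)  # errors spread evenly
errors_spiked <- c(0, 0, 6)  # one large error

mean(abs(errors_even))       # MAE  = 2
mean(abs(errors_spiked))     # MAE  = 2
sqrt(mean(errors_even^2))    # RMSE = 2
sqrt(mean(errors_spiked^2))  # RMSE ~ 3.46, the spike is punished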

Below, again, is the SQL notation of the RMSE.

Sqrt(Avg(Power(Actual - Forecast, 2)))
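And as a minimal sketch in R, using the same hypothetical actual and forecast values as before:

# Hypothetical actual and forecast values, purely for illustration.
actual   <- c(100, 110, 120, 130)
forecast <- c( 98, 112, 117, 135)

# RMSE: square the errors, average them, then take the square root,
# mirroring Sqrt(Avg(Power(Actual - Forecast, 2))) in SQL.
rmse <- sqrt(mean((actual - forecast)^2))
rmse  # ~3.24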

Mean Absolute Percentage Error (MAPE)

The one issue you may run into with both the RMSE and MAE is that both values can just become large numbers that don't really say all that much. What does an RMSE of 597 mean? How bad or good is that? Part of this is because you need to compare it to other models. Another issue is that the RMSE is based on the difference between the actual and the forecast, which, depending on your data, could be on very different scales. For instance, if you are creating a model for a billion-dollar corporation, your error will be much larger than one for a company that only grosses six figures.

In this case, the mean absolute percentage error is a good method in the sense that it expresses the error as a percentage of the actual value. This provides more of a standardized error measure. For instance, if the error was 10 and the actual value was 100, the percentage would be 10%; if the error was 100 and the actual value was 1,000, the measure would still be 10%.

This provides a little more context than the RMSE and the MAE, which can help better explain the model's accuracy.

The SQL notation is listed below.

Avg(Abs(Actual - Forecast) / Abs(Actual)) * 100
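Again, here is a minimal sketch in R with the same hypothetical values:

# Hypothetical actual and forecast values, purely for illustration.
actual   <- c(100, 110, 120, 130)
forecast <- c( 98, 112, 117, 135)

# MAPE: each absolute error as a percentage of its actual value, averaged,
# mirroring Avg(Abs(Actual - Forecast) / Abs(Actual)) * 100 in SQL.
mape <- mean(abs(actual - forecast) / abs(actual)) * 100
mape  # ~2.54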

Mean Absolute Scaled Error (MASE)

The mean absolute scaled error is the last error that we will be discussing today. The MASE is slightly different from the other three. It compares the MAE of the model you are currently testing to the MAE of the naive model. The naive model simply uses the previous observation as the forecast for the current one.

The MASE is the ratio of your model's MAE to the MAE of the naive model. In this way, when the MASE is equal to 1, your model has the same MAE as the naive model, so you almost might as well pick the naive model. If the model's MASE is 0.5, that would suggest your model is about 2x as good as just picking the previous value.

This error skips the step of running several models and instead automatically compares your model against a baseline. It provides a little more context than the MAE, RMSE, and MAPE.
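Here is a minimal sketch of that ratio in R, with the same hypothetical values (note that, formally, the naive MAE is usually computed on the training data; this sketch just follows the ratio described above):

# Hypothetical actual and forecast values, purely for illustration.
actual   <- c(100, 110, 120, 130)
forecast <- c( 98, 112, 117, 135)

# MAE of the model being tested.
mae_model <- mean(abs(actual - forecast))

# MAE of the naive model, which forecasts each observation with the
# previous one (the first observation has no naive forecast).
mae_naive <- mean(abs(diff(actual)))

# MASE: ratio of the model's MAE to the naive model's MAE.
mase <- mae_model / mae_naive
mase  # 0.3, well under 1, so the model beats the naive forecast here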

Overall, these four errors tell a story that can help you decide whether your algorithm or model is a good fit. There are still other factors to consider, but I do hope this helped simplify these strange abbreviations. If you have any more questions about these errors, or other statistics or programming questions, please feel free to reach out!

Other great reads about data science:

What is A Decision Tree

How Algorithms Can Become Unethical and Biased

How Men’s Wearhouse Could Use Data Science Cont

Introduction To Time Series In R

How To Develop Robust Algorithms

4 Must Have Skills For Data Scientists

