Classification algorithms learn how to assign class labels to examples (observations or data points), although their decisions can appear opaque.

This tutorial is authored by KVS Setty. You can find the complete source code at my git repository.

A popular diagnostic for understanding the decisions made by a classification algorithm is the decision surface. This is a plot that shows how a trained machine learning algorithm predicts a coarse grid across the input feature space.

A decision surface plot is a powerful tool for understanding how a given model "sees" the prediction task and how it has decided to divide the input feature space by class label.

In this tutorial, you will discover how to plot a decision surface for a classification machine learning algorithm.

After completing this tutorial, you will know:

A decision surface is a diagnostic tool for understanding how a classification algorithm divides up the feature space.
How to plot a decision surface using crisp class labels for a machine learning algorithm.
How to plot and interpret a decision surface using predicted probabilities.

Let's get started.

Tutorial Overview

This tutorial is divided into four parts; they are:

Decision Surface
Dataset and Model
Plot a Decision Surface
Plot the decision surface of a decision tree on the iris dataset

Decision Surface

Classification machine learning algorithms learn to assign labels to input examples (observations).

Consider numeric input features for the classification task defining a continuous input feature space. We can think of each input feature as defining an axis or dimension of the feature space. Two input features would define a feature space that is a plane, with dots representing input coordinates in the input space. If there were three input variables, the feature space would be a three-dimensional volume. If there were n input variables, the feature space would be an n-dimensional hyperspace. It is difficult to visualize spaces beyond three dimensions.

Each point in the space can be assigned a class label. In terms of a two-dimensional feature space, we can think of each point on the plane as having a different color according to its assigned class.

The goal of a classification algorithm is to learn how to divide up the feature space such that labels are assigned correctly to points in the feature space, or at least, as correctly as possible.

This is a useful geometric understanding of predictive classification modeling. We can take it one step further.

Once a classification machine learning algorithm divides a feature space, we can then classify each point in the feature space, on some arbitrary grid, to get an idea of how exactly the algorithm chose to divide up the feature space.

This is called a decision surface or decision boundary, and it provides a diagnostic tool for understanding a model on a predictive classification modeling task.

Although the notion of a "surface" suggests a two-dimensional feature space, the method can be used with feature spaces with more than two dimensions, where a surface is created for each pair of input features.

Now that we are familiar with what a decision surface is, next, let's define a dataset and model for which we later explore the decision boundary.

Dataset and Model

In this section, we will define a classification task and a predictive model to learn the task.

Synthetic Classification Dataset

We can use the make_blobs() scikit-learn function to define a classification task with a two-dimensional numerical feature space and each point assigned one of two class labels, e.g. a binary classification task.
...
# generate dataset
X, y = make_blobs(n_samples=1000, centers=2, n_features=2, random_state=1, cluster_std=3)

Once defined, we can then create a scatter plot of the feature space with the first feature defining the x-axis, the second feature defining the y-axis, and each sample represented as a point in the feature space.

We can then color points in the scatter plot according to their class label as either 0 or 1.

...
# create scatter plot of samples from each class
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.scatterplot(x="x1", y="x2", hue='class', data=data)

Combining all this together, the complete example of defining and plotting a synthetic classification dataset is listed below.

In [1]:
# generate binary classification dataset and plot
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
from sklearn.datasets import make_blobs
# generate dataset
X, y = make_blobs(n_samples=1000, centers=2, n_features=2, random_state=1, cluster_std=3)

In [2]:
X_df = pd.DataFrame(X, columns=['x1', 'x2'])
X_df.head()

Out[2]:

In [3]:
y_df = pd.DataFrame(y, columns=["class"])

In [4]:
y_df.head()

Out[4]:

In [5]:
frames = [X_df, y_df]
data = pd.concat(frames, axis=1)
data.head()

Out[5]:

In [6]:
# create scatter plot of samples from each class
sns.scatterplot(x="x1", y="x2", hue='class', data=data)

Out[6]:
<matplotlib.axes._subplots.AxesSubplot at 0x1a62b1ec0f0>

Running the example above creates the dataset, then plots it as a scatter plot with points colored by class label.

We can see a clear separation between examples from the two classes, and we can imagine how a machine learning model might draw a line to separate the two classes, e.g. perhaps a diagonal line right through the middle of the two groups.

Fit Predictive Classification Model

We can now fit a model on our dataset.

In this case, we will fit a logistic regression algorithm because we can predict both crisp class labels and probabilities, both of which we can use in our decision surface.

We can define the model, then fit it on the training dataset.

...
# define the model
model = LogisticRegression()
# fit the model
model.fit(X, y)

Once defined, we can use the model to make a prediction for the training dataset to get an idea of how well it learned to divide the feature space of the training dataset and assign labels.

...
# make predictions
yhat = model.predict(X)

The predictions can be evaluated using classification accuracy.

...
# evaluate the predictions
acc = accuracy_score(y, yhat)
print('Accuracy: %.3f' % acc)
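As a side note, classification accuracy is simply the fraction of predictions that match the true labels. The minimal sketch below (assuming the X, y, and fitted model objects from the snippets above) computes it by hand and should agree with accuracy_score().

# minimal sketch: accuracy as the fraction of matching labels
# (assumes X, y and the fitted model from the snippets above)
import numpy as np
yhat = model.predict(X)
manual_acc = np.mean(yhat == y)
print('Manual accuracy: %.3f' % manual_acc)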
Combining all this together, the complete example of fitting and evaluating a model on the synthetic binary classification dataset is listed below.

In [7]:
# example of fitting and evaluating a model on the classification dataset
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
# generate dataset
X, y = make_blobs(n_samples=1000, centers=2, n_features=2, random_state=1, cluster_std=3)
# define the model
model = LogisticRegression()
# fit the model
model.fit(X, y)
# make predictions
yhat = model.predict(X)
# evaluate the predictions
acc = accuracy_score(y, yhat)
print('Accuracy: %.3f' % acc)

Accuracy: 0.972

Running the example fits the model and makes a prediction for each example.

Your specific results may vary given the stochastic nature of the learning algorithm. Try running the example a few times.

In this case, we can see that the model achieved a performance of about 97.2 percent.

Now that we have a dataset and model, let's explore how we can develop a decision surface.

Plot a Decision Surface

We can create a decision boundary by fitting a model on the training dataset, then using the model to make predictions for a grid of values across the input domain.

Once we have the grid of predictions, we can plot the values and their class label.

A scatter plot could be used if a fine enough grid was taken. A better approach is to use a contour plot that can interpolate the colors between the points. The contourf() Matplotlib function can be used.

This requires a few steps.

First, we need to define a grid of points across the feature space. To do this, we can find the minimum and maximum values for each feature and expand the grid one step beyond that to ensure the whole feature space is covered.

...
# define bounds of the domain
min1, max1 = X[:, 0].min()-1, X[:, 0].max()+1
min2, max2 = X[:, 1].min()-1, X[:, 1].max()+1

We can then create a uniform sample across each dimension using the arange() function at a chosen resolution. We will use a resolution of 0.1 in this case.

...
# define the x and y scale
x1grid = arange(min1, max1, 0.1)
x2grid = arange(min2, max2, 0.1)

Now we need to turn this into a grid. We can use the meshgrid() NumPy function to create a grid from these two vectors.

If the first feature x1 is our x-axis of the feature space, then we need one row of x1 values of the grid for each point on the y-axis. Similarly, if we take x2 as our y-axis of the feature space, then we need one column of x2 values of the grid for each point on the x-axis.

The meshgrid() function will do this for us, duplicating the rows and columns as needed. It returns two grids for the two input vectors: the first grid of x-values and the second of y-values, organized in an appropriately sized grid of rows and columns across the feature space.

...
# create all of the lines and rows of the grid
xx, yy = meshgrid(x1grid, x2grid)

We then need to flatten out the grid to create samples that we can feed into the model and make a prediction. To do this, first, we flatten each grid into a vector.

...
# flatten each grid to a vector
r1, r2 = xx.flatten(), yy.flatten()
r1, r2 = r1.reshape((len(r1), 1)), r2.reshape((len(r2), 1))

Then we stack the vectors side by side as columns in an input dataset, e.g. like our original training dataset, but at a much higher resolution.

...
# horizontal stack vectors to create x1,x2 input for the model
grid = hstack((r1, r2))

We can then feed this into our model and get a prediction for each point in the grid.

...
# make predictions for the grid
yhat = model.predict(grid)

So far, so good.
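To keep these steps together, here is a minimal sketch that wraps the grid construction and prediction into a single helper; the function name make_prediction_grid is a hypothetical one introduced here for illustration, not part of scikit-learn, and it assumes a fitted two-feature model like the one above.

import numpy as np

def make_prediction_grid(model, X, resolution=0.1):
    # hypothetical helper: build a grid over the two-feature space and predict each point
    min1, max1 = X[:, 0].min() - 1, X[:, 0].max() + 1
    min2, max2 = X[:, 1].min() - 1, X[:, 1].max() + 1
    x1grid = np.arange(min1, max1, resolution)
    x2grid = np.arange(min2, max2, resolution)
    xx, yy = np.meshgrid(x1grid, x2grid)
    r1 = xx.flatten().reshape((-1, 1))
    r2 = yy.flatten().reshape((-1, 1))
    grid = np.hstack((r1, r2))
    yhat = model.predict(grid)
    return xx, yy, yhat

# usage (assuming the fitted model and X from above):
# xx, yy, yhat = make_prediction_grid(model, X)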
We have a grid of values across the feature space and the class labels as predicted by our model.

Next, we need to plot the grid of values as a contour plot. The contourf() function takes separate grids for each axis, just like what was returned from our prior call to meshgrid(). Great!

So we can use xx and yy that we prepared earlier and simply reshape the predictions (yhat) from the model to have the same shape.

...
# reshape the predictions back into a grid
zz = yhat.reshape(xx.shape)

We then plot the decision surface with a two-color colormap.

...
# plot the grid of x, y and z values as a surface
pyplot.contourf(xx, yy, zz, cmap='Paired')

We can then plot the actual points of the dataset over the top to see how well they were separated by the logistic regression decision surface.

The complete example of plotting a decision surface for a logistic regression model on our synthetic binary classification dataset is listed below.

In [8]:
# decision surface for logistic regression on a binary classification dataset
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
# generate dataset
X, y = make_blobs(n_samples=1000, centers=2, n_features=2, random_state=1, cluster_std=3)
# define bounds of the domain
min1, max1 = X[:, 0].min()-1, X[:, 0].max()+1
min2, max2 = X[:, 1].min()-1, X[:, 1].max()+1
# define the x and y scale
x1grid = np.arange(min1, max1, 0.1)
x2grid = np.arange(min2, max2, 0.1)
# create all of the lines and rows of the grid
xx, yy = np.meshgrid(x1grid, x2grid)
# flatten each grid to a vector
r1, r2 = xx.flatten(), yy.flatten()
r1, r2 = r1.reshape((len(r1), 1)), r2.reshape((len(r2), 1))
# horizontal stack vectors to create x1,x2 input for the model
grid = np.hstack((r1, r2))
# define the model
model = LogisticRegression()
# fit the model
model.fit(X, y)
# make predictions for the grid
yhat = model.predict(grid)
# reshape the predictions back into a grid
zz = yhat.reshape(xx.shape)
# plot the grid of x, y and z values as a surface
plt.contourf(xx, yy, zz, cmap='Paired')
# create scatter plot for samples from each class
for class_value in range(2):
    # get row indexes for samples with this class
    row_ix = np.where(y == class_value)
    # create scatter of these samples
    plt.scatter(X[row_ix, 0], X[row_ix, 1], cmap='Paired')

We can add more depth to the decision surface by using the model to predict probabilities instead of class labels.

...
# make predictions for the grid
yhat = model.predict_proba(grid)
# keep just the probabilities for class 0
yhat = yhat[:, 0]

When plotted, we can see how confident or likely it is that each point in the feature space belongs to each of the class labels, as seen by the model.

We can use a different color map that has gradations, and show a legend so we can interpret the colors.

...
# plot the grid of x, y and z values as a surface
c = pyplot.contourf(xx, yy, zz, cmap='RdBu')
# add a legend, called a color bar
pyplot.colorbar(c)
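If you want finer control over the probability gradations, contourf() also accepts an explicit levels argument; the minimal sketch below assumes the xx, yy and probability grid zz (the reshaped yhat) from the steps above.

# minimal sketch: explicit contour levels for the probability surface
# (assumes xx, yy and zz computed from predict_proba as above)
import numpy as np
import matplotlib.pyplot as plt
c = plt.contourf(xx, yy, zz, levels=np.linspace(0, 1, 11), cmap='RdBu')
plt.colorbar(c)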
The complete example of creating a decision surface using probabilities is listed below.

In [9]:
# probability decision surface for logistic regression on a binary classification dataset
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
# generate dataset
X, y = make_blobs(n_samples=1000, centers=2, n_features=2, random_state=1, cluster_std=3)
# define bounds of the domain
min1, max1 = X[:, 0].min()-1, X[:, 0].max()+1
min2, max2 = X[:, 1].min()-1, X[:, 1].max()+1
# define the x and y scale
x1grid = np.arange(min1, max1, 0.1)
x2grid = np.arange(min2, max2, 0.1)
# create all of the lines and rows of the grid
xx, yy = np.meshgrid(x1grid, x2grid)
# flatten each grid to a vector
r1, r2 = xx.flatten(), yy.flatten()
r1, r2 = r1.reshape((len(r1), 1)), r2.reshape((len(r2), 1))
# horizontal stack vectors to create x1,x2 input for the model
grid = np.hstack((r1, r2))
# define the model
model = LogisticRegression()
# fit the model
model.fit(X, y)
# make predictions for the grid
yhat = model.predict_proba(grid)
# keep just the probabilities for class 0
yhat = yhat[:, 0]
# reshape the predictions back into a grid
zz = yhat.reshape(xx.shape)
# plot the grid of x, y and z values as a surface
c = plt.contourf(xx, yy, zz, cmap='RdBu')
# add a legend, called a color bar
plt.colorbar(c)
# create scatter plot for samples from each class
for class_value in range(2):
    # get row indexes for samples with this class
    row_ix = np.where(y == class_value)
    # create scatter of these samples
    plt.scatter(X[row_ix, 0], X[row_ix, 1], cmap='Paired')

Running the example predicts the probability of class membership for each point on the grid across the feature space and plots the result.

Here, we can see that the model is unsure (lighter colors) around the middle of the domain, given the sampling noise in that area of the feature space. We can also see that the model is very confident (full colors) in the bottom-left and top-right halves of the domain.

Together, the crisp class and probability decision surfaces are powerful diagnostic tools for understanding your model and how it divides the feature space for your predictive modeling task.

Plot the decision surface of a decision tree on the iris dataset

Plot the decision surface of a decision tree trained on pairs of features of the iris dataset. See the decision tree documentation for more information on the estimator.

For each pair of iris features, the decision tree learns decision boundaries made of combinations of simple thresholding rules inferred from the training samples.

We also show the tree structure of a model built on all of the features.
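One small difference from our earlier code: the listing below builds the grid of model inputs with np.c_[xx.ravel(), yy.ravel()] rather than the flatten/reshape/hstack steps we used above. The two approaches produce the same array, as this minimal sketch with a tiny toy grid shows.

import numpy as np
# tiny toy grid, just to illustrate the equivalence
xx, yy = np.meshgrid(np.arange(0, 1, 0.5), np.arange(0, 1, 0.5))
grid_a = np.c_[xx.ravel(), yy.ravel()]  # column-stack of the raveled grids
r1, r2 = xx.flatten().reshape((-1, 1)), yy.flatten().reshape((-1, 1))
grid_b = np.hstack((r1, r2))            # earlier flatten/hstack approach
print(np.array_equal(grid_a, grid_b))   # True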
In [10]:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, plot_tree

# Parameters
n_classes = 3
plot_colors = "ryb"
plot_step = 0.02

# Load data
iris = load_iris()

for pairidx, pair in enumerate([[0, 1], [0, 2], [0, 3], [1, 2], [1, 3], [2, 3]]):
    # We only take the two corresponding features
    X = iris.data[:, pair]
    y = iris.target

    # Train
    clf = DecisionTreeClassifier().fit(X, y)

    # Plot the decision boundary
    plt.subplot(2, 3, pairidx + 1)
    x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
                         np.arange(y_min, y_max, plot_step))
    plt.tight_layout(h_pad=0.5, w_pad=0.5, pad=2.5)

    Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
    Z = Z.reshape(xx.shape)
    cs = plt.contourf(xx, yy, Z, cmap=plt.cm.RdYlBu)

    plt.xlabel(iris.feature_names[pair[0]])
    plt.ylabel(iris.feature_names[pair[1]])

    # Plot the training points
    for i, color in zip(range(n_classes), plot_colors):
        idx = np.where(y == i)
        plt.scatter(X[idx, 0], X[idx, 1], c=color, label=iris.target_names[i],
                    cmap=plt.cm.RdYlBu, edgecolor='black', s=15)

plt.suptitle("Decision surface of a decision tree using paired features")
plt.legend(loc='lower right', borderpad=0, handletextpad=0)
plt.axis("tight")

plt.figure()
clf = DecisionTreeClassifier().fit(iris.data, iris.target)
plot_tree(clf, filled=True)
plt.show()

Summary

In this tutorial, you discovered how to plot a decision surface for a classification machine learning algorithm.

Specifically, you learned:

A decision surface is a diagnostic tool for understanding how a classification algorithm divides up the feature space.
How to plot a decision surface using crisp class labels for a machine learning algorithm.
How to plot and interpret a decision surface using predicted probabilities.

Do you have any questions? Ask your questions in the comments section of the post, and I will do my best to answer.

Previously published at https://kvssetty.com/plot-a-decision-surface-for-machine-learning-algorithms-in-python/