What is Required?

Python, Numpy, Pandas
Kaggle titanic dataset: https://www.kaggle.com/c/titanic-gettingStarted/data

Goal

The machine learning model is supposed to predict who survived the Titanic shipwreck. Here I will show you how to apply preprocessing techniques on the Titanic dataset.

Why do we need Preprocessing?

For machine learning algorithms to work, the raw data must be converted into a clean data set, and the data set must be converted to numeric data. You have to encode all the categorical labels to column vectors with binary values. Missing values or NaNs in the dataset are an annoying problem: you have to either drop the missing rows or fill them up with mean or interpolated values.

Note: Kaggle provides 2 datasets, train and test, separately. Both must have the same dimensions for the model.

Loading data in pandas

To work on the data, you can either load the CSV in spreadsheet software or in pandas. Let's load the CSV data in pandas.

import pandas as pd

df = pd.read_csv('train.csv')

Let's take a look at the data format below:

>>> df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 891 entries, 0 to 890
Data columns (total 12 columns):
PassengerId    891 non-null int64
Survived       891 non-null int64
Pclass         891 non-null int64
Name           891 non-null object
Sex            891 non-null object
Age            714 non-null float64
SibSp          891 non-null int64
Parch          891 non-null int64
Ticket         891 non-null object
Fare           891 non-null float64
Cabin          204 non-null object
Embarked       889 non-null object

If you carefully observe the above summary, there are 891 rows in total, but Age shows only 714 (meaning 177 values are missing), Embarked is missing 2 values, and Cabin is missing a lot as well. Object data types are non-numeric, so we have to find a way to encode them to numerical values.

Dropping Columns which are not useful

Let's try to drop some of the columns which may not contribute much to our machine learning model, such as Name, Ticket and Cabin.

cols = ['Name', 'Ticket', 'Cabin']
df = df.drop(cols, axis=1)

We dropped 3 columns:

>>> df.info()
PassengerId    891 non-null int64
Survived       891 non-null int64
Pclass         891 non-null int64
Sex            891 non-null object
Age            714 non-null float64
SibSp          891 non-null int64
Parch          891 non-null int64
Fare           891 non-null float64
Embarked       889 non-null object

Dropping rows having missing values

Next, if we want, we can drop all rows in the data that have missing values (NaN). You can do it like this:

df = df.dropna()

>>> df.info()
Int64Index: 712 entries, 0 to 890
Data columns (total 9 columns):
PassengerId    712 non-null int64
Survived       712 non-null int64
Pclass         712 non-null int64
Sex            712 non-null object
Age            712 non-null float64
SibSp          712 non-null int64
Parch          712 non-null int64
Fare           712 non-null float64
Embarked       712 non-null object

Problem with dropping rows having missing values

After dropping rows with missing values we find that the dataset is reduced from 891 to 712 rows, which means we are wasting data. Machine learning models need data for training to perform well, so we preserve the data and make use of it as much as we can; we will see how later. For now, we do not keep the dropna() result and continue with the full 891-row dataframe.

Creating Dummy Variables

Now we convert the Pclass, Sex and Embarked columns to dummy columns in pandas, and drop the originals after conversion.

dummies = []
cols = ['Pclass', 'Sex', 'Embarked']
for col in cols:
    dummies.append(pd.get_dummies(df[col]))

and then we combine them:

titanic_dummies = pd.concat(dummies, axis=1)

titanic_dummies now has 8 columns: 1, 2 and 3 represent the passenger class, female and male come from Sex, and C, Q and S come from Embarked.
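As an aside, pandas can also do this conversion in a single call: pd.get_dummies accepts a columns= argument that builds the dummy columns, prefixes their names (e.g. Pclass_1, Sex_female, Embarked_C) and drops the original categorical columns for you. A minimal sketch of that alternative (not what this post uses; we stick with the explicit loop so every step stays visible):

import pandas as pd

df = pd.read_csv('train.csv')
df = df.drop(['Name', 'Ticket', 'Cabin'], axis=1)
# One call: encode the three categorical columns and drop the originals
df = pd.get_dummies(df, columns=['Pclass', 'Sex', 'Embarked'])

Note that the column names differ from the loop-based approach (Pclass_1 instead of a bare 1), so pick one style and stay consistent.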
Finally, we concatenate the dummies to the original dataframe column-wise:

df = pd.concat((df, titanic_dummies), axis=1)

Now that we have converted the Pclass, Sex and Embarked values into columns, we drop the now-redundant original columns from the dataframe.

df = df.drop(['Pclass', 'Sex', 'Embarked'], axis=1)

Let's take a look at the new dataframe:

>>> df.info()
PassengerId    891 non-null int64
Survived       891 non-null int64
Age            714 non-null float64
SibSp          891 non-null int64
Parch          891 non-null int64
Fare           891 non-null float64
1              891 non-null float64
2              891 non-null float64
3              891 non-null float64
female         891 non-null float64
male           891 non-null float64
C              891 non-null float64
Q              891 non-null float64
S              891 non-null float64

Taking Care of Missing Data

All is good, except Age, which has lots of missing values. Let's fill in all those missing age values. Pandas has an interpolate() function that will replace all the missing NaNs with interpolated values.

df['Age'] = df['Age'].interpolate()

Now let's observe the data columns. Notice that Age is now complete, filled with the interpolated values:

>>> df.info()
Data columns (total 14 columns):
PassengerId    891 non-null int64
Survived       891 non-null int64
Age            891 non-null float64
SibSp          891 non-null int64
Parch          891 non-null int64
Fare           891 non-null float64
1              891 non-null float64
2              891 non-null float64
3              891 non-null float64
female         891 non-null float64
male           891 non-null float64
C              891 non-null float64
Q              891 non-null float64
S              891 non-null float64

Converting the dataframe to numpy

Now that we have converted all the data to numeric, it is time to prepare the data for our machine learning models. This is where scikit-learn and numpy come into play:

X = input set with the 14 dataframe attributes (we will remove Survived from it in a moment)
y = output, in this case 'Survived'

Now we convert our dataframe from pandas to numpy and assign the input and output:

import numpy as np

X = df.values
y = df['Survived'].values

X still has the Survived values in it, which should not be there. So we delete that column from the numpy array; it is column index 1 (the second column, right after PassengerId, since numpy indexes from zero):

X = np.delete(X, 1, axis=1)

Dividing the data set into training set and test set

Now that we are ready with X and y, let's split the dataset: 70% for training and 30% for testing, using scikit-learn's model_selection:

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

And that's about it, folks. You have learned how to preprocess the Titanic dataset. So go on, try it for yourself and start making your own predictions.
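To wrap up, here is the whole preprocessing flow from this post collected into one runnable script. This is just a recap sketch of the steps above; it assumes train.csv sits in the working directory, as before:

import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Load the Kaggle training data
df = pd.read_csv('train.csv')

# Drop columns that are unlikely to help the model
df = df.drop(['Name', 'Ticket', 'Cabin'], axis=1)

# Build one dummy frame per categorical column, then concatenate column-wise
dummies = [pd.get_dummies(df[col]) for col in ['Pclass', 'Sex', 'Embarked']]
df = pd.concat([df] + dummies, axis=1)
df = df.drop(['Pclass', 'Sex', 'Embarked'], axis=1)

# Fill the missing ages with interpolated values instead of dropping rows
df['Age'] = df['Age'].interpolate()

# Split into inputs and output, removing 'Survived' (column index 1) from X
X = df.values
y = df['Survived'].values
X = np.delete(X, 1, axis=1)

# 70% training, 30% test
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
print(X_train.shape, X_test.shape)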