160+ Data Science Interview Questions by @alexeygrigorev

March 1st 2020 46,023 reads

A typical interview process for a data science position includes multiple rounds. Often, one of these rounds covers theoretical concepts, where the goal is to determine whether the candidate knows the fundamentals of machine learning.

In this post, I'd like to summarize all my interviewing experience, from both interviewing and being interviewed, and present a list of 160+ theoretical data science questions.

This includes the following topics:

- Linear regression
- Validation
- Classification and logistic regression
- Regularization
- Decision trees
- Random forest
- Gradient boosting trees
- Neural networks
- Text classification
- Clustering
- Ranking: search and recommendation
- Time series

The number of questions in this post might seem overwhelming, and it indeed is. Keep in mind that the interview flow is based on what the company needs and what you have worked with, so if you didn't work with models in time series or computer vision, you shouldn't get questions about them.

Important: don't feel discouraged if you don't know the answers to some of the interview questions. This is absolutely fine.

Finally, to make it simpler, I grouped the questions into three categories, based on difficulty:

- 👶 easy
- ⭐️ medium
- 🚀 expert

That's, of course, subjective, and it's based only on my personal opinion.

Letβs start!

Supervised machine learning

- What is supervised machine learning? 👶

Linear regression

- What is regression? Which models can you use to solve a regression problem? 👶
- What is linear regression? When do we use it? 👶
- What's the normal distribution? Why do we care about it? 👶
- How do we check if a variable follows the normal distribution? ⭐️
- What if we want to build a model for predicting prices? Are prices distributed normally? Do we need to do any pre-processing for prices? ⭐️
- Which methods for solving linear regression do you know? ⭐️
- What is gradient descent? How does it work? ⭐️
- What is the normal equation? ⭐️
- What is SGD (stochastic gradient descent)? What's the difference from the usual gradient descent? ⭐️
- Which metrics for evaluating regression models do you know? 👶
- What are MSE and RMSE? 👶
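
If you want to check yourself on the last question, MSE and RMSE are simple enough to compute by hand. A minimal plain-Python sketch (the sample values are made up):

```python
import math

def mse(y_true, y_pred):
    """Mean squared error: average of squared residuals."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error: MSE brought back to the target's units."""
    return math.sqrt(mse(y_true, y_pred))

y_true = [3.0, 5.0, 7.0]
y_pred = [2.0, 5.0, 9.0]
print(mse(y_true, y_pred))   # (1 + 0 + 4) / 3 = 1.666...
print(rmse(y_true, y_pred))  # sqrt of the above
```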

Validation

- What is overfitting? 👶
- How to validate your models? 👶
- Why do we need to split our data into three parts: train, validation, and test? 👶
- Can you explain how cross-validation works? 👶
- What is K-fold cross-validation? 👶
- How do we choose K in K-fold cross-validation? What's your favorite K? 👶
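
As a sanity check for the cross-validation questions, here's a minimal sketch of how K-fold splits a dataset's indices (libraries like scikit-learn add shuffling and stratification on top of this idea):

```python
def kfold_indices(n, k):
    """Split indices 0..n-1 into k contiguous folds; each fold serves
    as the validation set once, with the remaining rows as training."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    splits, start = [], 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        splits.append((train, val))
        start += size
    return splits

for train, val in kfold_indices(6, 3):
    print(train, val)  # every index appears in exactly one validation fold
```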

Classification

- What is classification? Which models would you use to solve a classification problem? 👶
- What is logistic regression? When do we need to use it? 👶
- Is logistic regression a linear model? Why? 👶
- What is sigmoid? What does it do? 👶
- How do we evaluate classification models? 👶
- What is accuracy? 👶
- Is accuracy always a good metric? 👶
- What is the confusion table? What are the cells in this table? 👶
- What are precision, recall, and F1-score? 👶
- What is the precision-recall trade-off? ⭐️
- What is the ROC curve? When to use it? ⭐️
- What is AUC (AU ROC)? When to use it? ⭐️
- How to interpret the AU ROC score? ⭐️
- What is the PR (precision-recall) curve? ⭐️
- What is the area under the PR curve? Is it a useful metric? ⭐️
- In which cases is AU PR better than AU ROC? ⭐️
- What do we do with categorical variables? ⭐️
- Why do we need one-hot encoding? ⭐️
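
For the confusion-table questions above, all the basic metrics follow directly from the four cell counts. A quick sketch (the counts are made up):

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall, F1, and accuracy from confusion-table counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

p, r, f1, acc = classification_metrics(tp=8, fp=2, fn=4, tn=86)
print(p, r, f1, acc)  # 0.8, 0.666..., 0.727..., 0.94
```

Note how accuracy (0.94) looks great while recall (0.67) does not: with imbalanced classes, accuracy alone is misleading.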

Regularization

- What happens to our linear regression model if we have three columns in our data: x, y, z, and z is a sum of x and y? ⭐️
- What happens to our linear regression model if the column z in the data is a sum of columns x and y and some random noise? ⭐️
- What is regularization? Why do we need it? 👶
- Which regularization techniques do you know? ⭐️
- What kind of regularization techniques are applicable to linear models? ⭐️
- What does L2 regularization look like in a linear model? ⭐️
- How do we select the right regularization parameters? 👶
- What's the effect of L2 regularization on the weights of a linear model? ⭐️
- What does L1 regularization look like in a linear model? ⭐️
- What's the difference between L2 and L1 regularization? ⭐️
- Can we have both L1 and L2 regularization components in a linear model? ⭐️
- What's the interpretation of the bias term in linear models? ⭐️
- How do we interpret weights in linear models? ⭐️
- If a weight for one variable is higher than for another, can we say that this variable is more important? ⭐️
- When do we need to perform feature normalization for linear models? When is it okay not to do it? ⭐️
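
To build intuition for the L2 questions, here's a toy sketch: fitting a one-feature line by gradient descent with and without an L2 penalty. The data and hyperparameters are made up for illustration; the point is that the penalty shrinks the weight toward zero:

```python
def ridge_gradient_descent(xs, ys, alpha, lr=0.01, steps=1000):
    """Fit y ~ w * x + b by gradient descent on MSE + alpha * w**2.
    The L2 penalty pulls w toward zero; the bias b is not penalized."""
    w, b, n = 0.0, 0.0, len(xs)
    for _ in range(steps):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n + 2 * alpha * w
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w, b = w - lr * grad_w, b - lr * grad_b
    return w, b

xs, ys = [0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 2.0, 3.0]  # data on the line y = x
w_plain, _ = ridge_gradient_descent(xs, ys, alpha=0.0)  # no penalty: w close to 1
w_ridge, _ = ridge_gradient_descent(xs, ys, alpha=1.0)  # penalized: w shrunk toward 0
print(w_plain, w_ridge)
```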

Feature selection

- What is feature selection? Why do we need it? 👶
- Is feature selection important for linear models? ⭐️
- Which feature selection techniques do you know? ⭐️
- Can we use L1 regularization for feature selection? ⭐️
- Can we use L2 regularization for feature selection? ⭐️

Decision trees

- What are decision trees? 👶
- How do we train decision trees? ⭐️
- What are the main parameters of the decision tree model? 👶
- How do we handle categorical variables in decision trees? ⭐️
- What are the benefits of a single decision tree compared to more complex models? ⭐️
- How can we know which features are more important for the decision tree model? ⭐️
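
For the tree-training question, here's the core idea in miniature: score candidate splits by weighted Gini impurity and greedily pick the lowest. A plain-Python sketch:

```python
def gini(labels):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def split_score(left, right):
    """Weighted impurity of a candidate split; a decision tree
    greedily picks the split that minimizes this score."""
    n = len(left) + len(right)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

print(gini([1, 1, 0, 0]))           # 0.5: maximally mixed for two classes
print(split_score([1, 1], [0, 0]))  # 0.0: a perfect split
```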

Random forest

- What is random forest? 👶
- Why do we need randomization in random forest? ⭐️
- What are the main parameters of the random forest model? ⭐️
- How do we select the depth of the trees in random forest? ⭐️
- How do we know how many trees we need in random forest? ⭐️
- Is it easy to parallelize training of a random forest model? How can we do it? ⭐️
- What are the potential problems with many large trees? ⭐️
- What if, instead of finding the best split, we randomly select a few splits and pick the best among them? Will it work? 🚀
- What happens when we have correlated features in our data? ⭐️
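
The randomization question boils down to two ingredients, sketched below with made-up toy data: bootstrap sampling of rows (bagging) and majority voting across trees. (A real random forest also samples a random subset of features at each split.)

```python
import random
from collections import Counter

def bootstrap_sample(data, rng):
    """Sample len(data) rows with replacement: the bagging part of a random forest."""
    return [rng.choice(data) for _ in data]

def majority_vote(predictions):
    """Aggregate the per-tree predictions for one example."""
    return Counter(predictions).most_common(1)[0][0]

rng = random.Random(42)
data = [("a", 0), ("b", 1), ("c", 0), ("d", 1)]
print(bootstrap_sample(data, rng))     # some rows repeated, some missing
print(majority_vote([0, 1, 1, 1, 0]))  # 1
```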

Gradient boosting

- What are gradient boosting trees? ⭐️
- What's the difference between random forest and gradient boosting? ⭐️
- Is it possible to parallelize training of a gradient boosting model? How to do it? ⭐️
- Feature importance in gradient boosting trees: what are the possible options? ⭐️
- Are there any differences between continuous and discrete variables when it comes to feature importance of gradient boosting models? 🚀
- What are the main parameters in the gradient boosting model? ⭐️
- How do you approach tuning parameters in XGBoost or LightGBM? 🚀
- How do you select the number of trees in the gradient boosting model? ⭐️

Parameter tuning

- Which parameter tuning strategies (in general) do you know? ⭐️
- What's the difference between the grid search and random search tuning strategies? When to use one or another? ⭐️
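
The grid-vs-random question can be illustrated with a toy objective. The `score` function below is a made-up stand-in for what would, in practice, be a cross-validated model score:

```python
import itertools
import random

def score(params):
    """Hypothetical objective; peaks at lr=0.1, depth=5."""
    return -(params["lr"] - 0.1) ** 2 - (params["depth"] - 5) ** 2

grid = {"lr": [0.01, 0.1, 1.0], "depth": [3, 5, 7]}

# Grid search: evaluate every combination (3 * 3 = 9 trials here).
grid_trials = [dict(zip(grid, values)) for values in itertools.product(*grid.values())]
best_grid = max(grid_trials, key=score)

# Random search: spend the same budget sampling from (possibly continuous) ranges.
rng = random.Random(0)
random_trials = [{"lr": rng.uniform(0.01, 1.0), "depth": rng.randint(3, 7)} for _ in range(9)]
best_random = max(random_trials, key=score)

print(best_grid)  # {'lr': 0.1, 'depth': 5}
print(best_random)
```

Random search shines when only a few parameters matter: it tries nine distinct values of `lr` here, while grid search tries only three.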

Neural networks

- What kind of problems can neural nets solve? 👶
- How does a usual fully-connected feed-forward neural network work? ⭐️
- Why do we need activation functions? 👶
- What are the problems with sigmoid as an activation function? ⭐️
- What is ReLU? How is it better than sigmoid or tanh? ⭐️
- How can we initialize the weights of a neural network? ⭐️
- What if we set all the weights of a neural network to 0? ⭐️
- What regularization techniques for neural nets do you know? ⭐️
- What is dropout? Why is it useful? How does it work? ⭐️
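
For the sigmoid-vs-ReLU question, the key fact is what happens to the gradients. A small numeric check:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_grad(x):
    """The derivative is at most 0.25 and vanishes for large |x|."""
    s = sigmoid(x)
    return s * (1.0 - s)

def relu(x):
    return max(0.0, x)

def relu_grad(x):
    """The gradient is exactly 1 for positive inputs, so it does not shrink with depth."""
    return 1.0 if x > 0 else 0.0

print(sigmoid_grad(0.0))   # 0.25, the maximum possible
print(sigmoid_grad(10.0))  # ~4.5e-05: saturated, the gradient nearly vanishes
print(relu_grad(10.0))     # 1.0
```

Multiplying many factors of at most 0.25 through the chain rule is one way to see where vanishing gradients come from.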

Optimization in neural networks

- What is backpropagation? How does it work? Why do we need it? ⭐️
- Which optimization techniques for training neural nets do you know? ⭐️
- How do we use SGD (stochastic gradient descent) for training a neural net? ⭐️
- What's the learning rate? 👶
- What happens when the learning rate is too large? Too small? 👶
- How to set the learning rate? ⭐️
- What is Adam? What's the main difference between Adam and SGD? ⭐️
- When would you use Adam and when SGD? ⭐️
- Do we want a constant learning rate, or is it better to change it throughout training? ⭐️
- How do we decide when to stop training a neural net? 👶
- What is model checkpointing? ⭐️
- Can you tell us how you approach the model training process? ⭐️

Neural networks for computer vision

- How can we use neural nets for computer vision? ⭐️
- What's a convolutional layer? ⭐️
- Why do we actually need convolutions? Can't we use fully-connected layers for that? ⭐️
- What's pooling in CNN? Why do we need it? ⭐️
- How does max pooling work? Are there other pooling techniques? ⭐️
- Are CNNs resistant to rotations? What happens to the predictions of a CNN if an image is rotated? 🚀
- What are augmentations? Why do we need them? 👶
- What kind of augmentations do you know? 👶
- How to choose which augmentations to use? ⭐️
- What kind of CNN architectures for classification do you know? 🚀
- What is transfer learning? How does it work? ⭐️
- What is object detection? Do you know any architectures for that? 🚀
- What is object segmentation? Do you know any architectures for that? 🚀

Text classification

- How can we use machine learning for text classification? ⭐️
- What is bag of words? How can we use it for text classification? ⭐️
- What are the advantages and disadvantages of bag of words? ⭐️
- What are N-grams? How can we use them? ⭐️
- How large should N be for our bag of words when using N-grams? ⭐️
- What is TF-IDF? How is it useful for text classification? ⭐️
- Which model would you use for text classification with bag of words features? ⭐️
- Would you prefer a gradient boosting trees model or logistic regression when doing text classification with bag of words? ⭐️
- What are word embeddings? Why are they useful? Do you know Word2Vec? ⭐️
- Do you know any other ways to get word embeddings? 🚀
- If you have a sentence with multiple words, you may need to combine multiple word embeddings into one. How would you do it? ⭐️
- Would you prefer a gradient boosting trees model or logistic regression when doing text classification with embeddings? ⭐️
- How can you use neural nets for text classification? 🚀
- How can we use CNN for text classification? 🚀
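
For the bag-of-words and N-gram questions, here's a minimal plain-Python sketch; a real pipeline would add proper tokenization, stop-word removal, TF-IDF weighting, and so on:

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bag_of_words(text, n=1):
    """Count token n-grams; word order is lost beyond the n-gram window."""
    tokens = text.lower().split()
    return Counter(ngrams(tokens, n))

print(bag_of_words("the cat sat on the mat")[("the",)])          # 2
print(bag_of_words("the cat sat on the mat", n=2)[("the", "cat")])  # 1
```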

Clustering

- What is unsupervised learning? 👶
- What is clustering? When do we need it? 👶
- Do you know how K-means works? ⭐️
- How to select K for K-means? ⭐️
- Which other clustering algorithms do you know? ⭐️
- Do you know how DBScan works? ⭐️
- When would you choose K-means and when DBScan? ⭐️
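
For the K-means question, one iteration is just two steps, assignment and update. A one-dimensional sketch with made-up points:

```python
def kmeans_step(points, centers):
    """One K-means iteration: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    clusters = [[] for _ in centers]
    for p in points:
        nearest = min(range(len(centers)), key=lambda i: (p - centers[i]) ** 2)
        clusters[nearest].append(p)
    return [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]

points = [1.0, 2.0, 10.0, 11.0]
centers = [0.0, 5.0]
for _ in range(5):  # repeat until the assignments stop changing
    centers = kmeans_step(points, centers)
print(centers)  # [1.5, 10.5]
```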

Dimensionality reduction

- What is the curse of dimensionality? Why do we care about it? ⭐️
- Do you know any dimensionality reduction techniques? ⭐️
- What's singular value decomposition? How is it typically used for machine learning? ⭐️

Ranking and search

- What is the ranking problem? Which models can you use to solve it? ⭐️
- What are good unsupervised baselines for text information retrieval? ⭐️
- How would you evaluate your ranking algorithms? Which offline metrics would you use? ⭐️
- What are precision and recall at k? ⭐️
- What is mean average precision at k? ⭐️
- How can we use machine learning for search? ⭐️
- How can we get training data for our ranking algorithms? ⭐️
- Can we formulate the search problem as a classification problem? How? ⭐️
- How can we use clicks data as the training data for ranking algorithms? 🚀
- Do you know how to use gradient boosting trees for ranking? 🚀
- How do you do an online evaluation of a new ranking algorithm? ⭐️
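
Precision at k and average precision at k are easy to compute once you fix a convention; definitions vary slightly (especially the normalizer), so treat this plain-Python version as one common variant:

```python
def precision_at_k(relevant, ranked, k):
    """Fraction of the top-k ranked items that are relevant."""
    return sum(1 for item in ranked[:k] if item in relevant) / k

def average_precision_at_k(relevant, ranked, k):
    """Mean of precision@i over the ranks i (within the top k)
    where a relevant item appears."""
    hits, total = 0, 0.0
    for i, item in enumerate(ranked[:k], start=1):
        if item in relevant:
            hits += 1
            total += hits / i
    return total / min(len(relevant), k)

relevant = {"a", "c"}
ranked = ["a", "b", "c", "d"]
print(precision_at_k(relevant, ranked, 3))          # 2/3
print(average_precision_at_k(relevant, ranked, 4))  # (1/1 + 2/3) / 2 ≈ 0.833
```

Mean average precision (MAP) is then just this value averaged over queries.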

Recommender systems

- What is a recommender system? 👶
- What are good baselines when building a recommender system? ⭐️
- What is collaborative filtering? ⭐️
- How can we incorporate implicit feedback (clicks, etc.) into our recommender systems? ⭐️
- What is the cold start problem? ⭐️
- What are possible approaches to solving the cold start problem? ⭐️🚀

Time series

- What is a time series? 👶
- How is time series different from the usual regression problem? 👶
- Which models do you know for solving time series problems? ⭐️
- If there's a trend in our series, how can we remove it? And why would we want to do it? ⭐️
- You have a series with only one variable "y" measured at time t. How do you predict "y" at time t+1? Which approaches would you use? ⭐️
- You have a series with a variable "y" and a set of features. How do you predict "y" at t+1? Which approaches would you use? ⭐️
- What are the problems with using trees for solving time series problems? ⭐️
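
For the last few questions, a common trick is to turn the series into a supervised dataset with lag features, so that any regression model can predict y at t+1 from past values. A minimal sketch:

```python
def lag_features(series, lags):
    """Turn a univariate series into a supervised dataset:
    predict series[t] from its past values series[t - lag]."""
    rows, targets = [], []
    max_lag = max(lags)
    for t in range(max_lag, len(series)):
        rows.append([series[t - lag] for lag in lags])
        targets.append(series[t])
    return rows, targets

series = [1, 2, 3, 4, 5, 6]
X, y = lag_features(series, lags=[1, 2])
print(X)  # [[2, 1], [3, 2], [4, 3], [5, 4]]
print(y)  # [3, 4, 5, 6]
```

Note that validation must respect time order here: always train on the past and evaluate on the future, never with a random shuffle.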

That was a long list! I hope you found it useful. Good luck with your interviews!

The post is based on this thread on Twitter. Do you know the answers? Consider contributing to this GitHub repository!
