Modern World Problems of ML in Production and How to Cope With Them

by Victoria (@viktoriaerokhina), December 8th, 2023 | 10,766 reads

Too Long; Didn't Read

Deploying machine learning models in real-world scenarios brings challenges like model drift, scalability issues, interpretability concerns, data privacy, and the need for continuous integration/deployment (CI/CD) pipelines. Solutions involve monitoring and retraining for model drift, optimizing model architectures, leveraging hardware accelerators, and implementing explainable AI for model interpretability. Maintaining data privacy involves techniques like differential privacy and federated learning. Establishing robust CI/CD pipelines is crucial, with tools like MLflow and Kubeflow aiding in the process. Real-world examples from companies like Amazon, MobiDev, Citibank, Google, and Netflix illustrate the practical application of these solutions, emphasizing the evolving nature of machine learning challenges and solutions.


Machine learning has become an incredibly useful tool for tackling all kinds of problems, from image recognition to language processing and beyond. But taking a machine learning model from the lab and putting it to work in the real world comes with a whole host of challenges. In this article, we will look at some of the most pressing issues that arise when deploying machine learning systems in production environments and discuss potential solutions with examples of real systems that have grappled with these problems.


1. Model Drift and Non-stationarity

In many real-world scenarios, the underlying data distribution can change over time. This phenomenon, known as concept drift or model drift, means that a model trained on past data might become less accurate or relevant as new data comes in. For example, a model that predicts customer churn based on historical data might not perform well if the customer behavior changes due to external factors, such as market trends, competitors, or regulations.


To handle this issue, regular monitoring and periodic retraining are often required. Monitoring involves measuring the performance of the model on new data and detecting any significant changes or anomalies. Retraining involves updating the model with new data or using techniques such as online learning or transfer learning to adapt the model to the changing environment.
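The monitoring half of this loop can be sketched in a few lines. The class below is an illustrative toy, not any particular library's API: it tracks accuracy over a sliding window of recent predictions and flags a retrain when the window falls too far below the model's baseline.

```python
# Minimal drift-monitoring sketch: compare accuracy on a sliding window
# of recent predictions against a baseline, and flag a retrain when the
# drop exceeds a tolerance. Names and thresholds are illustrative.
from collections import deque

class DriftMonitor:
    def __init__(self, baseline_accuracy, window_size=500, tolerance=0.05):
        self.baseline = baseline_accuracy
        self.window = deque(maxlen=window_size)  # 1 if correct, 0 if not
        self.tolerance = tolerance

    def record(self, prediction, actual):
        self.window.append(1 if prediction == actual else 0)

    def needs_retraining(self):
        if len(self.window) < self.window.maxlen:
            return False  # not enough recent data yet
        recent_accuracy = sum(self.window) / len(self.window)
        return (self.baseline - recent_accuracy) > self.tolerance
```

A production system would also log the drift signal and version the retrained model, but the core decision is just this comparison.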


A good illustration of a system that deals with model drift is Amazon Personalize, which provides personalized recommendations for e-commerce customers. Amazon Personalize monitors the performance of its models and automatically retrains them with new data to keep them up-to-date and relevant.



2. Scalability and Latency

Depending on the complexity of the machine learning model, it might not always be able to respond quickly enough, leading to scalability and latency issues. For example, a model that generates natural language responses based on user queries might take too long to process large or complex inputs, resulting in poor user experience or lost opportunities.


To address these challenges, there are several viable solutions, such as:


  • Efficient model architectures: Designing models that are simpler, smaller, or more efficient can reduce the computational cost and improve inference speed. For example, choosing a convolutional neural network (CNN) architecture for tasks such as image classification enables highly parallel processing.


  • Hardware accelerators: Using specialized hardware devices, such as GPUs or TPUs, can speed up the computation and reduce the power consumption of ML models. For instance, Google’s BERT model, which is a large pre-trained language model for natural language understanding tasks, can run much faster on TPUs than on CPUs.


  • Optimizing for inference: Applying techniques such as quantization, pruning, or distillation can reduce the size or complexity of ML models without sacrificing much accuracy. For example, using ONNX Runtime, a cross-platform inference engine for ONNX models, one can optimize the performance of ML models across different hardware platforms.
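To make the quantization idea from the last bullet concrete, here is a stripped-down sketch of post-training 8-bit quantization in NumPy. Real toolchains (ONNX Runtime, TensorFlow Lite) do far more; this only shows the core idea of mapping float32 weights to int8 plus a scale factor, cutting memory to a quarter.

```python
# Post-training 8-bit quantization, reduced to its essence: store weights
# as int8 plus one float scale, and reconstruct approximate floats at
# inference time. Illustrative only; production quantizers also handle
# zero points, per-channel scales, and calibration data.
import numpy as np

def quantize_int8(weights):
    max_abs = float(np.abs(weights).max())
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
q, scale = quantize_int8(w)
w_approx = dequantize(q, scale)  # close to w, at a quarter of the memory
```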


One company that faces scalability and latency issues is MobiDev, which provides software development services for various industries and uses machine learning to create solutions for manufacturing, healthcare, e-commerce, and more. The company deals with large volumes of data and complex models that require high performance and low latency, relying on efficient model architectures, hardware accelerators, and inference optimizations to ensure the quality and speed of its solutions.


Another system that deals with these issues is Indeed, a leading job search platform that uses machine learning to analyze millions of job postings and resumes and provide relevant matches for job seekers and employers. Indeed has to handle high traffic and diverse queries that demand fast and accurate responses, so it relies on scalable cloud infrastructure, distributed computing, and caching to improve its scalability and latency.


3. Model Interpretability and Trust

In many industries, especially those that are heavily regulated (like finance or healthcare), decision-makers must understand and trust model predictions. Black-box models like deep neural networks can be hard to interpret, which can hinder their acceptance and deployment. For example, a model that approves or rejects loan applications based on complex features might not be able to explain why it made a certain decision, which can raise ethical or legal concerns.


To overcome this barrier, efforts in explainable AI (XAI) aim to make models more transparent and interpretable. There are different approaches to achieve this goal, such as:


  • Global explanations: Providing an overall understanding of how the model works or what features are important for its predictions; for example, using SHAP values, which measure the contribution of each feature to the prediction outcome.


  • Local explanations: Explaining a specific prediction or instance; in particular, using LIME, which perturbs the input and observes how the prediction changes to identify the most influential features.


  • Counterfactual explanations: Providing an alternative scenario or input that would lead to a different prediction or outcome; for instance, using DiCE, which generates diverse counterfactual examples that satisfy certain user-specified criteria.
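As a concrete taste of global explanations, here is permutation importance, a much cruder relative of SHAP: shuffle one feature at a time and see how much the model's error grows. The toy linear predictor and feature layout below are made up for illustration; they are not from SHAP or LIME.

```python
# Global-explanation sketch via permutation importance: a feature whose
# shuffling barely hurts the model contributes little to its predictions.
# `predict` can be any model's prediction function.
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    base_error = np.mean((predict(X) - y) ** 2)
    importances = []
    for j in range(X.shape[1]):
        errors = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break this feature's link to the target
            errors.append(np.mean((predict(Xp) - y) ** 2))
        importances.append(np.mean(errors) - base_error)
    return np.array(importances)

# Toy setup: the target depends only on feature 0, never on feature 1.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0]
predict = lambda X: 3.0 * X[:, 0]
imp = permutation_importance(predict, X, y)  # imp[0] large, imp[1] zero
```

SHAP values refine this intuition with a game-theoretic attribution over all feature subsets, but the question asked is the same: how much does each feature matter?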


One instance of a system that requires model interpretability and trust is Citibank, a global bank that offers various financial services and uses machine learning for fraud detection and risk management. Citibank needs to explain its models and decisions to customers and regulators and ensure compliance with ethical and legal standards, which is why it relies on Feedzai's anomaly detection system, which provides explainable AI solutions for financial data.



4. Data Privacy and Security

Managing sensitive data poses both ethical and legal challenges. For models trained on user data, there is a risk of inadvertently exposing confidential information or being vulnerable to adversarial attacks. For example, a model that generates captions for images might leak personal details of the users or their photos, or a model that recognizes faces might be fooled by malicious inputs that manipulate its predictions.


To address some of these concerns, the following techniques can be utilized:


  • Differential privacy: Adding noise or randomness to the data or the model to preserve the privacy of individual records or users; in particular, using TensorFlow Privacy, which implements differentially private stochastic gradient descent for training ML models.


  • Federated learning: Distributing the training process across multiple devices or nodes that hold local data, without requiring a centralized server or data collection; in particular, using TensorFlow Federated, which provides a framework for federated learning and computation.
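The differential-privacy idea in the first bullet can be shown in miniature with the Laplace mechanism: clip each record's influence, then add noise calibrated to that influence before releasing an aggregate. The function and its parameters below are a hand-rolled illustration, not the TensorFlow Privacy API, which applies the same principle to gradients during training.

```python
# Laplace-mechanism sketch: a differentially private mean. Clipping
# bounds how much any single record can move the answer; the noise scale
# (sensitivity / epsilon) hides that remaining influence.
import numpy as np

def private_mean(values, lower, upper, epsilon, seed=None):
    rng = np.random.default_rng(seed)
    clipped = np.clip(values, lower, upper)       # bound each record's effect
    sensitivity = (upper - lower) / len(values)   # max shift from one record
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(clipped.mean() + noise)

ages = np.array([23, 35, 41, 29, 52, 38])
# Smaller epsilon = stronger privacy = noisier answer; that trade-off is
# exactly the complexity the next paragraph mentions.
approx = private_mean(ages, lower=0, upper=100, epsilon=1.0, seed=0)
```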


However, implementing these techniques in real-world systems can be complex and challenging, as they involve trade-offs between privacy, accuracy, and efficiency. One example of a system that faces data privacy and security issues is Google, whose products and services (search, ads, maps, photos, the Assistant, and more) are built on user data and powered by machine learning. Google must protect its users' data from unauthorized access or misuse, and it uses differential privacy and federated learning to train its models without compromising that data.


MobiDev is another company that handles sensitive data from its clients and users, such as medical records, personal information, or financial transactions. MobiDev uses encryption, authentication, authorization, and auditing techniques to ensure the privacy and security of its data.


5. Continuous Integration and Deployment (CI/CD)

Unlike traditional software, where updates are usually deterministic and straightforward, updating machine learning models involves retraining on new data, which can lead to different behaviors or even regressions in performance. For example, a model that detects spam emails might become less effective or more prone to errors after being retrained on new data that contains new types of spam or legitimate emails.


To ensure the quality and reliability of ML models in production, establishing a robust CI/CD pipeline for machine learning is crucial but also challenging.


A typical CI/CD pipeline for machine learning includes the following steps:


  • Data validation: Checking the quality and consistency of the data used for training and inference, such as missing values, outliers, or schema changes.


  • Model validation: Evaluating the performance and robustness of the model on various metrics and scenarios, such as accuracy, fairness, or adversarial robustness.


  • Model testing: Testing the functionality and compatibility of the model with the production environment, such as input/output formats, dependencies, or hardware requirements.


  • Model deployment: Deploying the model to the production environment, such as cloud platforms, edge devices, or web applications.


  • Model monitoring: Monitoring the behavior and performance of the model in production, such as prediction errors, anomalies, or drifts.
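The first of these steps, data validation, can be sketched as a simple schema check run before any batch reaches training or inference. Real pipelines use dedicated tools (TFX Data Validation, Great Expectations); the column names and ranges below are hypothetical.

```python
# Data-validation sketch: reject rows with missing fields, wrong types,
# or out-of-range values before they reach the model. The schema format
# here ({column: (type, min, max)}) is invented for illustration.
def validate_batch(rows, schema):
    """Return a list of human-readable problems found in `rows`."""
    problems = []
    for i, row in enumerate(rows):
        for col, (typ, lo, hi) in schema.items():
            if col not in row or row[col] is None:
                problems.append(f"row {i}: missing '{col}'")
            elif not isinstance(row[col], typ):
                problems.append(f"row {i}: '{col}' has wrong type")
            elif not (lo <= row[col] <= hi):
                problems.append(f"row {i}: '{col}'={row[col]} out of range")
    return problems

schema = {"age": (int, 0, 120), "income": (float, 0.0, 1e7)}
rows = [
    {"age": 34, "income": 52000.0},   # clean
    {"age": -3, "income": None},      # out of range + missing value
]
issues = validate_batch(rows, schema)
```

A CI/CD pipeline would fail the build (or quarantine the batch) when `issues` is non-empty, exactly as unit tests gate a traditional software release.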


To facilitate these steps, there are various tools and frameworks available, such as:


  • MLflow: An open-source platform for managing the end-to-end ML lifecycle, from data preparation to model deployment and monitoring.


  • Kubeflow: An open-source platform for deploying scalable and portable ML pipelines on Kubernetes clusters.


  • Seldon Core: An open-source platform for deploying and managing ML models on Kubernetes clusters with various features, such as logging, metrics, or explainability.

Netflix, a leading streaming service that surfaces content based on user preferences, is one example of a system that uses CI/CD for machine learning. It applies ML to personalization, recommendation, search, and content delivery, and maintains a sophisticated CI/CD pipeline covering data validation, model validation, model testing, model deployment, and model monitoring.



Conclusion

Machine learning is not only a fascinating tool for solving various problems but also a distinctive field of exploration and innovation. As we deploy machine learning models in production, we encounter numerous challenges and opportunities that require continuous research and development. In this article, we have discussed some of the modern-world problems of ML in production and how to deal with them, with some recent real-world examples. We have also provided several examples of tools and frameworks that can help address these challenges.


This is not the end of the story, however. Machine learning is constantly evolving and expanding, and so are the problems and solutions. As Albert Einstein once said, “The more I learn, the more I realize how much I don’t know.” Whether you are a beginner or a professional, a student or a teacher, a researcher or a practitioner, there is always something new to learn and create in machine learning.