Google Cloud Architecture for Machine Learning Algorithms in the Telecom Industry
The unprecedented growth of mobile devices, applications, and services has placed enormous demands on mobile and wireless networking infrastructure. Rapid research and development of 5G systems has found ways to support growing mobile traffic volumes, real-time extraction of fine-grained analytics, and agile management of network resources, so as to maximize user experience.
Moreover, inference over heterogeneous mobile data from distributed devices is challenged by computational and battery-power limitations. ML models deployed at edge servers are therefore constrained to be lightweight, trading off model complexity against accuracy, and techniques such as model compression, pruning, and quantization are widely used.
In this blog, we try to understand the different use cases, problems, and solutions in the telecom industry that can be addressed with ML.
In this section, let’s look at use cases in the telecom industry where ML and AI algorithms have played a significant role, such as network traffic prediction, customer retention, and fraud analysis.
The network and service control layer contains multi-dimension convergent management and control functions to manage and control traditional and SDN/NFV cloud networks.
AI-driven smart reasoning has expedited intelligent network operations and management, from reporting anomalies to day-to-day activities. Network performance data can help identify sleeping cells and trigger an automatic restart, and supports network optimization (coverage, capacity, and Massive MIMO optimization), Root Cause Analysis (RCA), intelligent transmission route optimization, and network strategy optimization.
Features governing Network security include:
Sentiment analysis with social media
Network operators have been using Machine Learning to infer brand impact and customer sentiment: end-users’ social networking posts help them monitor language patterns and infer different kinds of sentiment, revealing trends such as which factors drive new customers to subscribe in a new market, or when subscribers seek out a competitor.
Customer Service Recommendation and Business Personalization
Service recommenders may also be used to boost existing services or to identify why users do not adopt some services and, in turn, to suggest value-added services to them based on their profile and choices. In addition, they can predict churn from the usage patterns of past churners and changes in other usage profiles.
An SVM (Support Vector Machine)-based music recommendation system, for example, can combine personal user-level information, timing, location, and activity records with musical context to suggest suitable music services.
With customer-generated network data, it is easier to automate the process of grouping customers into segments, like profiling customers based on their calling and messaging behavior.
Operators try to present product/service advertisements that are tailored to an individual, situation, and device. This type of targeted advertising, when directed at the right customer base, helps operators and advertisers zero in on customers with ads that fit their needs and interests.
Customer segmentation on call-records
Clustering and classification techniques such as K-means group mobile customers based on their call detail records (CDRs) and analyze their consumer behavior. PCA-based dimensionality reduction can be used to identify relevant and recurrent patterns (e.g., location, to identify common presence patterns) among the CDRs of a given user. Further, matrix factorization can be employed to infer location preferences from sparse CDR data and generate location-based recommendations.
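As a minimal sketch of this pipeline, the snippet below standardizes a synthetic stand-in for per-subscriber CDR features, reduces dimensionality with PCA, and segments subscribers with K-means. The feature matrix and cluster counts are illustrative assumptions, not taken from any real operator data.

```python
# Hypothetical CDR feature matrix: one row per subscriber, columns are
# aggregated usage statistics (call counts, mean duration, SMS volume, ...).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
cdr_features = rng.gamma(shape=2.0, scale=3.0, size=(500, 12))  # synthetic stand-in

# Standardize, reduce dimensionality with PCA, then segment with K-means.
scaled = StandardScaler().fit_transform(cdr_features)
reduced = PCA(n_components=3).fit_transform(scaled)
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(reduced)

print(segments.shape)  # one segment label per subscriber
```

In practice, the number of clusters and principal components would be chosen via silhouette scores or explained-variance ratios rather than fixed up front.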
Customer Churn Prediction
Applications of SVM, Naive Bayes, Decision Trees, Boosting, Bagging, and Random Forests are found in customer churn prediction, alongside unsupervised (clustering) techniques.
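A hedged sketch of one such supervised approach follows: a Random Forest trained on synthetic subscriber features, with churners as the minority class. The feature set and class balance are assumptions for illustration; a real pipeline would use engineered CDR and billing features.

```python
# Illustrative churn-prediction sketch on synthetic usage features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for subscriber features (tenure, spend, complaints, ...).
X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.8, 0.2],  # churners as the minority class
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
churn_prob = model.predict_proba(X_test)[:, 1]  # per-subscriber churn risk
auc = roc_auc_score(y_test, churn_prob)
print(round(auc, 3))
```

Ranking subscribers by `churn_prob` lets retention teams target the highest-risk accounts first, which is why AUC (a ranking metric) is a common evaluation choice here.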
Latent Dirichlet Allocation (LDA), a generative topic-modeling technique, is used to extract latent features from mobile Short Messaging Service (SMS) communication for automatic discovery of user interests. LDA splits mobile SMS documents into segments and discovers the topics in each segment via latent features, which can be used to filter malicious SMS communication. Topic models can thus detect distinctive latent features to support automatic content filtering and remove security threats to mobile subscribers and operators.
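A minimal sketch of LDA topic discovery over SMS-like messages is shown below, assuming a tiny toy corpus; a real filtering system would train on far more data and inspect the learned topic-word distributions.

```python
# Toy LDA sketch: discover topic mixtures over short SMS-like messages.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

sms = [
    "win free prize claim now call",
    "free entry win cash prize now",
    "meeting moved to friday see you",
    "see you at lunch friday",
]
counts = CountVectorizer().fit_transform(sms)  # bag-of-words counts
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Per-message topic mixtures; each row sums to 1 over the topics.
doc_topics = lda.transform(counts)
print(doc_topics.shape)  # (4, 2)
```

Messages whose mixture is dominated by a "spam-like" topic can then be flagged for filtering.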
Clustering to segment customer profiles requires complex multivariate time-series models, which have limitations around scalability and the ability to accurately represent the temporal behavior sequences (TBS) of users. An LDA model is well suited to representing the noisy temporal behavior of mobile subscribers: designing compact and interpretable profiles helps relax the strict temporal ordering of user preferences.
Categorization of Deep Learning algorithms and their use cases in the Telecom Industry, Source – https://pdfs.semanticscholar.org/55c1/9610017a65319b130911651fbb2e3b552e51.pdf
The Telecom industry acknowledges several benefits of employing efficient Deep Learning to address automated network maintenance tasks:
Traditional ML algorithms require feature engineering, whereas deep learning can automatically extract high-level features from data with complex structure and inner correlations. Automating feature engineering matters particularly in mobile networks, where data is generated by heterogeneous sources, is often noisy, and exhibits non-trivial spatial/temporal patterns whose labeling requires substantial human effort.
Deep Learning can handle large amounts of data while controlling model over-fitting, making it well suited to the high volumes of diverse data generated by mobile networks at a fast pace. Training traditional ML algorithms, e.g., Support Vector Machines (SVM) and Gaussian Processes (GP), sometimes requires storing all the data in memory, which is computationally infeasible in big-data scenarios. In contrast to traditional ML models that do not scale, the Stochastic Gradient Descent (SGD) used to train neural networks only requires a subset of the data at each training step.
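The mini-batch point above can be sketched with scikit-learn's `partial_fit` interface, which consumes one batch at a time so the full dataset never needs to sit in memory; the streamed batches here are synthetic stand-ins for, say, windows of KPI measurements.

```python
# Incremental (SGD-style) training over a stream of mini-batches.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(random_state=0)  # linear model trained by SGD
classes = np.array([0, 1])           # must be declared on the first partial_fit

for _ in range(20):  # each iteration simulates one arriving mini-batch
    X_batch = rng.normal(size=(64, 5))
    y_batch = (X_batch[:, 0] + X_batch[:, 1] > 0).astype(int)
    clf.partial_fit(X_batch, y_batch, classes=classes)  # one update per batch

# Evaluate on a fresh batch drawn from the same synthetic distribution.
X_eval = rng.normal(size=(200, 5))
y_eval = (X_eval[:, 0] + X_eval[:, 1] > 0).astype(int)
print(round(clf.score(X_eval, y_eval), 3))
```

The same pattern (load batch, update, discard) is what makes SGD-trained neural networks practical on data volumes that would overflow memory for kernel methods like SVMs or GPs.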
Traditional supervised learning is only effective when sufficient labeled data is available, yet most current mobile systems generate unlabeled or semi-labeled data. Deep Learning approaches such as the Restricted Boltzmann Machine (RBM), the Generative Adversarial Network (GAN), and one/zero-shot learning can exploit such data, giving them wide applicability to telecom-domain problems.
Compressed representations learned by deep neural networks can be shared across different networks/telecom providers, which is limited or difficult to achieve with other ML paradigms (e.g., linear regression, random forests). A single model can therefore be trained to fulfill multiple objectives, without complete retraining for each task, saving CPU and memory in mobile networks.
Deep Learning is effective in handling multivariate geometric mobile data, such as user location represented by coordinates, surroundings, environment, altitude, topology, metrics, and order, through dedicated architectures such as PointNet++ and Graph CNN.
PointNet++, for instance, is a hierarchical neural network similar to conventional CNNs: for a concentrated geographical region, it applies PointNet recursively on a nested partitioning of the input point set, and is thus better able to capture local structures and finer details.
Despite the challenges posed by Deep Learning models, emerging tools and technologies make them practical in mobile networks:
(i) Advanced On-Demand Parallel Computing Infrastructure, (ii) Distributed Scalable Machine Learning Systems, (iii) Dedicated Deep Learning libraries like Tensorflow and PyTorch, (iv) Fast Online Optimization algorithms, and (v) Fog Computing.
Deep Learning has a wide range of applications in mobile and wireless networks.
Now, let’s take a quick look at the different Deep Learning platforms available, the mobile hardware they support, and their speed and mobile compatibility.
Comparison of Mobile Deep Learning Models
The figure below depicts the different components involved in building the ML platform: Network Monitoring/Optimization, Media Settlement, Advertising, Audience Orientation, Pattern Recognition, Sensor Data Mining, and Mobility Analytics.
Cloud Architecture with GCP for telecom Machine Learning and AI algorithms
Network State Prediction refers to inferring mobile network traffic or performance indicators from historical cellular measurements of eNodeB, sector, and carrier data. MLPs and Deep Learning LSTM-based techniques are used to predict users’ QoE and to evaluate the best beam for transmission.
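As a lightweight stand-in for the LSTM predictors described above (chosen so the sketch stays dependency-light), the example below forecasts the next value of a synthetic hourly cell-traffic series from its recent lags with a linear model; the daily sinusoid plus noise is an assumed toy signal, not real KPI data.

```python
# Lag-feature forecasting of a synthetic daily-periodic traffic series.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
t = np.arange(500)
traffic = 10 + 5 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.5, size=t.size)

lags = 24  # one day of hourly history per training example
X = np.array([traffic[i:i + lags] for i in range(t.size - lags)])
y = traffic[lags:]

model = LinearRegression().fit(X[:-48], y[:-48])  # hold out the last 2 days
pred = model.predict(X[-48:])
mae = float(np.mean(np.abs(pred - y[-48:])))
print(round(mae, 3))
```

An LSTM replaces the fixed lag window with a learned recurrent state, which is what lets it capture longer and more irregular temporal dependencies than this baseline.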
By leveraging sparse coding and max-pooling, semi-supervised Deep Learning models have been developed to classify received frame/packet patterns and infer the original properties of flows in a WiFi network.
Further, AI capable 5G networks aid in:
Predicting Mobile traffic at city scale
The ST-DenNetFus Deep Learning framework uses location-based ECI metrics to dynamically predict network demand (i.e., uplink and downlink throughput) in every region of a city. The ST-DenNetFus architecture captures unique properties (e.g., temporal closeness, period, and trend) of spatio-temporal data through various branches of dense neural networks (CNNs). It improves network capacity estimation by fusing in external data sources (e.g., crowd mobility patterns, temporal functional regions, and the day of the week) that had not been considered before.
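The fusion idea can be sketched as follows: per-region spatio-temporal features are concatenated with external signals before a final predictor. This is an illustrative simplification with synthetic data and a gradient-boosted regressor, not the ST-DenNetFus architecture itself.

```python
# Late fusion by concatenation: spatio-temporal + external features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 400
st_features = rng.normal(size=(n, 8))   # e.g., recent demand history per region
external = np.column_stack([
    rng.integers(0, 7, size=n),         # day of the week
    rng.random(n),                      # crowd-mobility index (synthetic)
])

# Synthetic demand driven by one ST feature plus the external signal.
demand = st_features[:, 0] * 2 + external[:, 1] + rng.normal(0, 0.1, size=n)

fused = np.hstack([st_features, external])  # concatenate the two views
model = GradientBoostingRegressor(random_state=0).fit(fused[:300], demand[:300])
print(round(float(model.score(fused[300:], demand[300:])), 3))
```

In the deep version, each branch (closeness, period, trend, external) is a network whose outputs are fused with learned weights rather than plain concatenation.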
a. Estimating metro density from streaming CDR data using RNNs: the trajectory of a mobile phone user is taken as a sequence of locations, which is then fed to RNN-based models built to handle sequential data.
b. CDR data can also be used to study demographics, where CNN is used to predict the age and gender of mobile users.
c. CDR data is also used to predict tourists’ next locations.
d. Generating human activity chains using an Input-Output HMM-LSTM generative model.
RNN-based predictors significantly outperform traditional ML methods, including Naive Bayes, SVM, RF, and MLP.
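For contrast with those RNN predictors, a first-order Markov baseline for next-location prediction over a toy trajectory looks like this; the location names are purely illustrative.

```python
# First-order Markov baseline: predict the most frequent successor location.
from collections import Counter, defaultdict

trajectory = ["home", "work", "cafe", "work", "cafe", "home", "work", "cafe"]

transitions = defaultdict(Counter)
for src, dst in zip(trajectory, trajectory[1:]):
    transitions[src][dst] += 1  # count observed moves src -> dst

def predict_next(location):
    # Most frequent successor of the current location.
    return transitions[location].most_common(1)[0][0]

print(predict_next("work"))  # → cafe
```

The baseline only sees the immediately preceding location; the RNN models above outperform it precisely because their hidden state summarizes the whole trajectory so far.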
Analysis of mobile data has therefore become an important and popular research direction in the mobile networking domain, as the rapid emergence of IoT sensors and their data-collection strategies provides a powerful basis for app-level data mining.
App-level mobile data analysis takes two forms: (i) cloud-based computing and (ii) edge-based computing.
In the former, mobile devices act as data collectors and messengers that constantly send data to cloud servers via local points of access with limited data pre-processing capabilities. In edge-based computing, pre-trained models are offloaded from the cloud to individual devices. The primary applications include mobile healthcare, mobile pattern recognition, mobile Natural Language Processing (NLP), and Automatic Speech Recognition (ASR).
Mobile Health: Wearable health-monitoring devices being introduced to the market incorporate medical sensors that capture the physical condition of their wearers and provide real-time feedback (e.g., heart rate, blood pressure, breathing status), or trigger alarms reminding users to take medical action.
Deep Learning-driven MobiEar assists deaf people’s awareness of emergencies by operating efficiently on smartphones.
Deep Learning-based (DL) models (CNNs and RNNs) are able to classify the lifestyle and environmental traits of volunteers and perform Human Activity Recognition on heterogeneous, high-dimensional mobile sensor data, including accelerometer, magnetometer, and gyroscope measurements. ConvLSTMs are known for fusing data gathered from multiple sensors to perform activity recognition.
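The activity-recognition setup can be sketched on synthetic tri-axial accelerometer windows; a Random Forest on hand-crafted summary statistics stands in for the CNN/ConvLSTM models, and the two "activities" are simulated signals, not real sensor recordings.

```python
# Toy HAR sketch: classify synthetic accelerometer windows by activity.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def make_windows(n, amp):
    # Each window: 50 tri-axial samples; signal amplitude differs per activity.
    signal = amp * np.sin(np.linspace(0, 10, 50))[None, :, None]
    return signal + rng.normal(0, 0.2, size=(n, 50, 3))

walking, resting = make_windows(100, 2.0), make_windows(100, 0.1)
windows = np.concatenate([walking, resting])
labels = np.array([1] * 100 + [0] * 100)  # 1 = walking, 0 = resting

# Hand-crafted summary features per window (mean and std over time and axes).
feats = np.column_stack([windows.mean(axis=(1, 2)), windows.std(axis=(1, 2))])
scores = cross_val_score(RandomForestClassifier(random_state=0), feats, labels, cv=3)
print(round(float(scores.mean()), 3))
```

The deep models cited above skip the hand-crafted feature step entirely, learning discriminative filters directly from the raw windows, which is exactly the feature-engineering advantage discussed earlier.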
Mobile motion sensors collect data via video capture, accelerometer readings, and Passive Infra-Red (PIR) motion sensing of the specific actions and activities a human subject performs. Models trained on the server for domain-specific tasks through federated learning can then serve a broad range of devices.
Mobile Pattern Recognition relies on a mobile camera or other sensors to identify useful patterns.
Object Classification finds wide application on mobile devices, since devices take photos and rely on an automated photo-tagging process.
One such DL-based framework is DeepCham, which generates high-quality domain-aware training instances for adaptation from in-situ mobile photos. It combines a distributed algorithm that identifies qualifying images stored on each mobile device for training with a user labeling process for the recognizable objects in those images, using suggestions automatically generated by a generic deep model.
Mobile classifiers can also assist Virtual Reality (VR) applications, where Deep Learning object detectors are incorporated into a mobile Augmented Reality (AR) system. CNN-based frameworks perform object detection for facial-expression recognition when users wear head-mounted displays in the VR environment.
A lightweight Deep Learning-based object detection framework that combines spatial relations can also be provided on-device.
Deep Learning-Driven Mobile Analytics
CNNs and RNNs are the most successful architectures in such applications as they can effectively exploit spatial and temporal correlations.
Such mobility models can use an attention candidate generator to produce candidates, which are exactly the regularities of the mobility, and an attention selector to match the candidate vectors with the query vector, i.e., the current mobility status.
GPS records and traffic accident data are combined to understand the correlation between human mobility and traffic accidents. The design uses a stacked denoising autoencoder to learn a compact representation of human mobility, which is subsequently used to predict traffic accident risk.
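As a simplified stand-in for the stacked denoising autoencoder, the sketch below trains an MLP to map noise-corrupted mobility features back to the clean ones through a narrow hidden layer, which is where the compact representation lives. The low-rank synthetic features are an assumption made so the bottleneck can succeed.

```python
# Denoising with a bottlenecked MLP: reconstruct clean features from noisy ones.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
latent = rng.random((300, 2))            # hidden "true" mobility factors
clean = latent @ rng.random((2, 6))      # 6 correlated observed features
noisy = clean + rng.normal(0, 0.1, clean.shape)  # corrupted input

# 3-unit hidden layer = the compact representation (bottleneck).
dae = MLPRegressor(hidden_layer_sizes=(3,), max_iter=2000, random_state=0)
dae.fit(noisy, clean)                    # learn to undo the corruption

recon = dae.predict(noisy)
mse = float(np.mean((recon - clean) ** 2))
print(round(mse, 4))
```

In the cited design, the bottleneck activations (not the reconstructions) are what get fed to the downstream accident-risk predictor.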
DBNs (Deep Belief Networks) are used extensively to sense and predict human emergency behavior, mostly in the case of natural disasters, through the use of GPS records.
A Deep Learning-based approach called ST-ResNet is used to collectively forecast the inflow and outflow of crowds in every region of a city. The ST-ResNet (residual neural network) architecture is designed around the unique properties of spatio-temporal data, modeling the temporal closeness, period, and trend properties of crowd traffic. While training on spatio-temporal factors, ST-ResNet assigns different weights to different branches and regions, and incorporates external factors such as weather and the day of the week.
Location-based services and applications (e.g., mobile AR, GPS) demand precise individual positioning technology, which Deep Learning techniques supply for both device-free and device-based localization services.
Although Deep Learning has unique advantages when addressing mobile network problems, it also has several shortcomings that partially restrict its applicability in this domain.
In this blog, we discussed traditional vs. Deep Learning algorithms, DL-based architectures, their pros and cons, and their applications in the telecom industry. We also explored the data ingestion, categorization, and model deployment architecture in production. Finally, we looked at recent advances in ML-driven mobile-app development (object detection, speaker identification, emotion recognition, stress detection, and ambient scene analysis), built-in techniques for sustaining limited mobile batteries by building memory- and energy-efficient apps, and model compression techniques.