
Supervised Learning Based Real-Time Adaptive Beamforming On-board: Supervised Learning for Adaptive Beamforming


Too Long; Didn't Read

While software-defined payloads are very promising, their effective utilization requires advanced RRM techniques to optimize resource allocation in real-time.

This paper is available on arXiv under the CC BY-NC-SA 4.0 DEED license.

Authors:

(1) Flor Ortiz, University of Luxembourg;

(2) Juan A. Vasquez-Peralvo, University of Luxembourg;

(3) Jorge Querol, University of Luxembourg;

(4) Eva Lagunas, University of Luxembourg;

(5) Jorge L. Gonzalez Rios, University of Luxembourg;

(6) Marcele O. K. Mendonça, University of Luxembourg;

(7) Luis Garces, University of Luxembourg;

(8) Victor Monzon Baeza, University of Luxembourg;

(9) Symeon Chatzinotas, University of Luxembourg.

IV. SUPERVISED LEARNING FOR ADAPTIVE BEAMFORMING

The design of our beamforming matrix allows us to divide the 36 × 36 matrix into four distinct 18 × 18 sections (for more detail, see [8]). Consequently, our goal is reduced to predicting an 18 × 18 matrix, comprising a total of 324 elements.
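As a rough illustration of this decomposition (with assumed array shapes and placeholder values, not the paper's actual matrix), the full matrix can be sliced into four quadrants, and only one of them flattened into the 324-element prediction target:

```python
import numpy as np

# Full beamforming matrix (assumed 36 x 36 shape, placeholder values).
W = np.random.rand(36, 36)

# Split into four distinct 18 x 18 sections (quadrants).
sections = [
    W[:18, :18],   # top-left
    W[:18, 18:],   # top-right
    W[18:, :18],   # bottom-left
    W[18:, 18:],   # bottom-right
]

# Only one 18 x 18 section has to be predicted: 324 values in total.
target = sections[0].flatten()
print(target.shape)  # (324,)
```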


For this study, we present two approaches based on supervised learning: Approach 1, based on a multi-label classification neural network, and Approach 2, based on clustering combined with a classification neural network.


Approach 1 employs a multi-label classification neural network for beamforming, as shown in Algorithm 1. This method is designed to predict the activation state of individual elements within the beamforming matrix. The primary goal is to determine which elements should be active (assigned a value of 1) and which should be inactive (assigned a value of 0) based on beam-related input features.


The neural network takes eight beam-related input features: beam width in azimuth, beam width in elevation, minimum SLL in azimuth, minimum SLL in elevation, equivalent isotropic radiated power (EIRPb), azimuth, elevation, and number of elements. The output layer consists of 324 units, corresponding to the 324 elements of the 18 × 18 section of the beamforming matrix. Each unit in the output layer uses a sigmoid activation function, which produces values between 0 and 1, indicating the probability that each element is active.
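A minimal Keras sketch of such an architecture is shown below; the hidden-layer sizes and activations are illustrative assumptions, since the section above only fixes the input and output dimensions:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

n_features = 8   # beam widths (az/el), min SLL (az/el), EIRP, azimuth, elevation, number of elements
n_outputs = 324  # one unit per element of the 18 x 18 beamforming-matrix section

model = models.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(256, activation="relu"),           # assumed hidden layer
    layers.Dense(256, activation="relu"),           # assumed hidden layer
    layers.Dense(n_outputs, activation="sigmoid"),  # per-element activation probability in [0, 1]
])
model.summary()
```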


The loss function used is binary cross-entropy, which calculates the error of each element individually and then aggregates these errors to evaluate the overall model performance. Model performance is evaluated by multi-label accuracy or the F1-score, which measures the percentage of correctly predicted element activations for each input.
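The sketch below illustrates these choices on placeholder data: binary cross-entropy scores each of the 324 outputs independently, while multi-label accuracy and a micro-averaged F1-score are computed after thresholding the predicted probabilities at 0.5 (the threshold and the averaging scheme are assumptions, not taken from the paper):

```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import f1_score

# Binary cross-entropy treats each of the 324 outputs as an independent 0/1 target.
bce = tf.keras.losses.BinaryCrossentropy()

def multilabel_scores(y_true, y_prob, threshold=0.5):
    """Return (multi-label accuracy, micro-averaged F1) after thresholding probabilities."""
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    accuracy = (y_pred == np.asarray(y_true)).mean()  # fraction of correctly predicted activations
    f1 = f1_score(y_true, y_pred, average="micro")
    return accuracy, f1

# Placeholder labels and predictions for four beams.
y_true = np.random.randint(0, 2, size=(4, 324))
y_prob = np.random.rand(4, 324)
print(bce(y_true.astype(float), y_prob).numpy(), multilabel_scores(y_true, y_prob))
```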


The neural network architecture for Approach 1 is constructed accordingly, with input, hidden, and output layers. Once trained, this model can predict the activation state of the elements in the beamforming matrix based on the characteristics of the input beams.
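An end-to-end usage sketch, with synthetic data and assumed hyperparameters standing in for the paper's actual training setup, could look like this:

```python
import numpy as np
from tensorflow.keras import layers, models

# Synthetic training data: 8 beam-related features -> 324 element activation labels.
X = np.random.rand(1000, 8)
Y = np.random.randint(0, 2, size=(1000, 324))

model = models.Sequential([
    layers.Input(shape=(8,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(324, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["binary_accuracy"])
model.fit(X, Y, epochs=5, batch_size=32, verbose=0)  # assumed training settings

# Inference: threshold the predicted probabilities to get the 18 x 18 activation pattern.
probs = model.predict(X[:1])
activation_pattern = (probs >= 0.5).astype(int).reshape(18, 18)
```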


Approach 2 employs a two-step process involving clustering and classification to determine the appropriate beamforming matrix, as explained in Algorithm 2. Initially, a K-means-based clustering algorithm is used to group similar sets of input variables related to the beamforming matrix design. This clustering step transforms the problem into a binary classification scenario, where each class corresponds to a specific predefined beamforming matrix.
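A sketch of this first step using scikit-learn's KMeans on placeholder beam features is shown below; the choice of two clusters mirrors the binary-classification framing above and is an assumption here:

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder beam-related input vectors (8 features per beam).
X = np.random.rand(1000, 8)

# Group similar input vectors; each cluster is then associated with one
# predefined beamforming matrix (two clusters assumed here).
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = kmeans.labels_  # class label = index of the predefined matrix
```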


The neural network used for classification is designed to learn and solve this binary classification task by analyzing the characteristics of the input variables and assigning them to the appropriate pre-trained matrix. This approach aims to optimize the selection of beamforming matrices based on the input variables, improving the system’s overall performance.
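For the second step, a small classifier along the following lines (layer sizes and training settings are illustrative assumptions) could map the input features to the cluster label, i.e., select which predefined beamforming matrix to apply:

```python
import numpy as np
from tensorflow.keras import layers, models

# Features and cluster labels produced by the K-means step (placeholders here).
X = np.random.rand(1000, 8)
labels = np.random.randint(0, 2, size=1000)

clf = models.Sequential([
    layers.Input(shape=(8,)),
    layers.Dense(64, activation="relu"),    # assumed hidden layer
    layers.Dense(1, activation="sigmoid"),  # binary choice between the two predefined matrices
])
clf.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
clf.fit(X, labels, epochs=5, batch_size=32, verbose=0)  # assumed training settings

# Select which predefined beamforming matrix to apply for a new beam request.
matrix_index = int(clf.predict(X[:1])[0, 0] >= 0.5)
```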