In this article, we will learn about GNNs, their structure, and their applications.
Deep learning has changed the way we process data, using increasingly cheap computational power (Moore's law) to solve real-world problems and accomplish cognitive tasks that our brains do almost effortlessly, such as image classification, natural language processing, and video processing.
But most of these tasks involve data that is typically represented in Euclidean space. However, we are witnessing a rise of problems where the data comes from non-Euclidean domains and is better represented as graphs, with complex relationships and interdependencies between objects.
The complexity of graph data has imposed significant challenges for both classical ML and modern DL algorithms. Recently, many studies extending DL approaches for graph data have emerged.
In a great paper released late last year, entitled "A Comprehensive Survey on Graph Neural Networks" [1], the authors proposed a new taxonomy that divides state-of-the-art graph neural networks into four main categories: recurrent graph neural networks (RecGNNs), convolutional graph neural networks (ConvGNNs), graph autoencoders (GAEs), and spatial-temporal graph neural networks (STGNNs).
In the same paper, the authors also give examples of applications of GNNs.
Take a minute and just imagine if we gathered the structures of all "approved" drugs, or all known molecules in general, and trained a GNN to learn the molecular patterns/structures of drugs, until it could identify whether a molecule found here, or in space (counting on you, Elon Musk!!!), is a drug or not. I believe the results could be rather surprising and could help us discover or rediscover drugs to cure diseases that today are considered incurable. If we think about it, most diseases, and even death itself, could be seen as technical problems. How can some creatures live hundreds of years and others just a few decades? Can we prolong our lives indefinitely?
I know that last statement could spark a heated argument in the comment section, but I'm not married to any idea; please share your thoughts. Furthermore, I would like to let you know that I'm a Christian and I believe in God.
But how cool would that be!!! We might invent our own version of the Lazarus Pit, since we are slowly but surely blurring the line between sci-fi and reality.
GNNs have properties similar to their DL counterparts: they are capable of regression, classification, generating data for empty nodes and edges, and so much more that we have yet to discover. The field is still in its beginning stages; who knows what we could do in a few years and a couple of thousand papers from now.
For me, the most interesting applications are recommendation and health, with health being number one.
Now, graphs can be irregular: a graph may have a variable number of unordered nodes, and the nodes may have different numbers of neighbours. As a result, some important operations (e.g., convolutions) that are easy to compute in the image domain are difficult to apply in the graph domain.

Furthermore, a core assumption of existing machine learning algorithms is that instances are independent of each other. This assumption no longer holds for graph data, because each instance (node) is related to others by links of various types, such as citations, friendships, and interactions. But this is not a handicap: recently we have seen increasing interest in extending, or perhaps I should say porting, deep learning approaches to the graph domain, with models specifically designed to work with graph data.
Convolutional Neural Networks (ConvNets) are out of the scope of this article.
Recurrent graph neural networks (RecGNNs) are mostly the pioneering works of graph neural networks. RecGNNs aim to learn node representations with recurrent neural architectures: they assume a node in a graph constantly exchanges information/messages with its neighbours until a stable equilibrium is reached. RecGNNs are conceptually important and inspired later research on convolutional graph neural networks. In particular, the idea of message passing is inherited by spatial-based convolutional graph neural networks.
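To make the message-passing-to-equilibrium idea concrete, here is a minimal NumPy sketch on a toy graph. The update rule and convergence test are simplified illustrations of the recurrent idea (small random weights keep the update roughly contractive), not the exact formulation from any particular RecGNN paper:

```python
import numpy as np

# Toy undirected graph over 4 nodes, as an adjacency matrix.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

X = np.random.randn(4, 8)                # fixed input features, one row per node
H = np.zeros_like(X)                     # recurrent hidden state of each node
W_self = np.random.randn(8, 8) * 0.1     # small weights -> roughly contractive update
W_msg = np.random.randn(8, 8) * 0.1

# Nodes repeatedly exchange messages with neighbours until states stop changing.
for step in range(200):
    messages = A @ H                     # sum of neighbours' current states
    H_new = np.tanh(X @ W_self + messages @ W_msg)
    if np.linalg.norm(H_new - H) < 1e-6: # (approximately) stable equilibrium
        break
    H = H_new

print(f"reached equilibrium after {step} steps")
```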
Convolutional graph neural networks (ConvGNNs) generalize the operation of convolution from grid data to graph data. The main idea is to generate a node v's representation by aggregating its own features x_v and its neighbours' features x_u, where u ∈ N(v).
Different from RecGNNs, ConvGNNs stack multiple graph convolutional layers to extract high-level node representations. ConvGNNs play a central role in building up many other complex GNN models. Figure 2a shows a ConvGNN for node classification. Figure 2b demonstrates a ConvGNN for graph classification.
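As a concrete instance of this aggregation, here is a minimal NumPy sketch of the popular GCN-style layer (in the spirit of Kipf & Welling), which mixes each node's own features with its neighbours' via a symmetrically normalised adjacency matrix. The toy graph and dimensions are made up for illustration:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph convolutional layer: each node aggregates its own features
    and its neighbours' features, with symmetric normalisation."""
    A_hat = A + np.eye(A.shape[0])             # self-loops keep the node's own x_v
    d_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
    A_norm = d_inv_sqrt @ A_hat @ d_inv_sqrt   # normalised neighbourhood averaging
    return np.maximum(0, A_norm @ H @ W)       # linear transform + ReLU

# Stacking layers extracts progressively higher-level node representations.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.random.randn(3, 4)                      # input node features
W1, W2 = np.random.randn(4, 8), np.random.randn(8, 2)
H1 = gcn_layer(A, X, W1)
H2 = gcn_layer(A, H1, W2)                      # e.g. per-node class scores
```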
Graph autoencoders (GAEs) are unsupervised learning frameworks that encode nodes/graphs into a latent vector space and reconstruct graph data from the encoded information. GAEs are used to learn network embeddings and graph generative distributions. For network embedding, GAEs learn latent node representations through reconstructing graph structural information such as the graph adjacency matrix. For graph generation, some methods generate nodes and edges of a graph step by step while other methods output a graph all at once. Figure 2c presents a GAE for network embedding.
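Here is a minimal sketch of the network-embedding flavour, assuming a single graph-convolution-style encoder and the common inner-product decoder that tries to reconstruct the adjacency matrix (in the spirit of Kipf & Welling's GAE; the exact layer and loss here are simplified assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy graph and features.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 8)

# Encoder: one graph-convolution-style layer mapping nodes to 2-d latent vectors.
A_hat = A + np.eye(4)
A_norm = A_hat / A_hat.sum(axis=1, keepdims=True)  # simple row normalisation
W = np.random.randn(8, 2) * 0.1
Z = np.tanh(A_norm @ X @ W)                        # latent node embeddings

# Decoder: inner products between embeddings give edge probabilities, i.e. we
# try to reconstruct the adjacency matrix from the embeddings alone.
A_recon = sigmoid(Z @ Z.T)

# Reconstruction loss: binary cross-entropy against the true adjacency.
eps = 1e-9
loss = -np.mean(A * np.log(A_recon + eps) + (1 - A) * np.log(1 - A_recon + eps))
print(f"reconstruction loss: {loss:.3f}")
```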
Spatial-temporal graph neural networks (STGNNs) aim to learn hidden patterns from spatial-temporal graphs, which are becoming increasingly important in a variety of applications such as traffic speed forecasting, driver manoeuvre anticipation, and human action recognition. The key idea of STGNNs is to consider spatial dependency and temporal dependency at the same time. Many current approaches integrate graph convolutions, to capture spatial dependency, with RNNs or CNNs, to model temporal dependency. Figure 2d illustrates an STGNN for spatial-temporal graph forecasting.
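Here is a minimal sketch of that idea: a graph convolution applied at each time step to capture spatial dependency, feeding a simple recurrent update that carries state across time. The toy data, dimensions, and update rule are illustrative assumptions, far simpler than real STGNN architectures:

```python
import numpy as np

# Toy spatial-temporal input: T time steps of F features for each of N nodes,
# e.g. traffic speed readings from N road sensors.
N, T, F, H_DIM = 4, 10, 3, 8
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X_seq = np.random.randn(T, N, F)

A_hat = A + np.eye(N)
A_norm = A_hat / A_hat.sum(axis=1, keepdims=True)

W_in = np.random.randn(F, H_DIM) * 0.1       # spatial projection
W_rec = np.random.randn(H_DIM, H_DIM) * 0.1  # temporal (recurrent) weights
H = np.zeros((N, H_DIM))                     # hidden state per node

for t in range(T):
    spatial = A_norm @ X_seq[t] @ W_in       # graph convolution: spatial dependency
    H = np.tanh(spatial + H @ W_rec)         # recurrent update: temporal dependency

W_out = np.random.randn(H_DIM, 1) * 0.1
forecast = H @ W_out                         # e.g. next-step speed for every node
print(forecast.ravel())
```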
(Fig. 2: credit to paper [1])
With the graph structure and node content information as inputs, the outputs of GNNs can focus on different graph analytics tasks through one of the following mechanisms: node-level outputs (e.g., node regression and node classification), edge-level outputs (e.g., edge classification and link prediction), and graph-level outputs (e.g., graph classification).
Training Frameworks. Many GNNs (e.g., ConvGNNs) can be trained in a (semi-) supervised or purely unsupervised way within an end-to-end learning framework, depending on the learning tasks and label information available at hand.
Semi-supervised learning for node-level classification
Given a single network with some of the nodes labelled and the others unlabelled, ConvGNNs can learn a robust model that effectively identifies the class labels of the unlabelled nodes. To this end, an end-to-end framework can be built by stacking a couple of graph convolutional layers followed by a softmax layer for multi-class classification.
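A minimal NumPy sketch of this setup (forward pass and loss only; the weights, graph, and labels are made up). The key point is the mask: the cross-entropy is computed on the labelled nodes only, while the unlabelled nodes still participate through the graph structure:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# One graph, 5 nodes, 3 classes; only nodes 0 and 3 carry labels.
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
X = np.random.randn(5, 4)
labels = np.array([0, -1, -1, 2, -1])        # -1 marks unlabelled nodes
mask = labels >= 0

# Two stacked graph convolutional layers followed by a softmax layer.
A_hat = A + np.eye(5)
A_norm = A_hat / A_hat.sum(axis=1, keepdims=True)
W1 = np.random.randn(4, 8) * 0.1
W2 = np.random.randn(8, 3) * 0.1
H = np.maximum(0, A_norm @ X @ W1)
probs = softmax(A_norm @ H @ W2)

# Semi-supervised loss: cross-entropy over the labelled nodes only.
loss = -np.mean(np.log(probs[mask, labels[mask]] + 1e-9))
print(f"masked cross-entropy: {loss:.3f}")
```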
Supervised learning for graph-level classification
Graph-level classification aims to predict the class label(s) for an entire graph. End-to-end learning for this task can be realized with a combination of graph convolutional layers, graph pooling layers, and/or readout layers. While graph convolutional layers are responsible for extracting high-level node representations, graph pooling layers play the role of downsampling, coarsening each graph into a sub-structure each time.
A readout layer collapses the node representations of each graph into a graph representation. By applying a multi-layer perceptron and a softmax layer to graph representations, we can build an end-to-end framework for graph classification. An example is given in Figure 2b.
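A minimal sketch of this last stage, assuming a ConvGNN has already produced the node representations (mean pooling is used as the readout here; sum or max pooling are common alternatives):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Node representations for one graph, as produced by stacked ConvGNN layers.
H_nodes = np.random.randn(6, 8)              # 6 nodes, 8-dim representations

# Readout: collapse the node representations into one graph representation.
h_graph = H_nodes.mean(axis=0)

# Multi-layer perceptron + softmax for graph-level class probabilities.
W1, b1 = np.random.randn(8, 16) * 0.1, np.zeros(16)
W2, b2 = np.random.randn(16, 3) * 0.1, np.zeros(3)
hidden = np.maximum(0, h_graph @ W1 + b1)
class_probs = softmax(hidden @ W2 + b2)
print(class_probs)                           # probabilities over 3 graph classes
```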
Unsupervised learning for graph embedding
When no class labels are available in graphs, we can learn the graph embedding in a purely unsupervised way in an end-to-end framework. These algorithms exploit edge-level information in two ways. One simple way is to adopt an autoencoder framework where the encoder employs graph convolutional layers to embed the graph into the latent representation upon which a decoder is used to reconstruct the graph structure.
Another popular way is to utilize the negative sampling approach which samples a portion of node pairs as negative pairs while existing node pairs with links in the graphs are positive pairs. Then a logistic regression layer is applied to distinguish between positive and negative pairs.
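A minimal sketch of the negative-sampling objective, assuming node embeddings have already been produced by some encoder (random here, purely for illustration). Pairs are scored with a logistic function of their embedding dot product, pushing linked pairs towards 1 and sampled non-edges towards 0:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Node embeddings from some encoder (random here for illustration).
Z = rng.normal(size=(6, 4))

# Positive pairs: node pairs actually linked by an edge in the graph.
pos_pairs = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
edges = set(pos_pairs)

# Negative pairs: sampled node pairs that share no edge.
neg_pairs = []
while len(neg_pairs) < len(pos_pairs):
    i, j = rng.integers(0, 6, size=2)
    if i != j and (i, j) not in edges and (j, i) not in edges:
        neg_pairs.append((i, j))

def pair_scores(pairs):
    # Logistic-regression-style score for each pair of node embeddings.
    return np.array([sigmoid(Z[i] @ Z[j]) for i, j in pairs])

eps = 1e-9
loss = -(np.log(pair_scores(pos_pairs) + eps).mean()
         + np.log(1 - pair_scores(neg_pairs) + eps).mean())
print(f"negative-sampling loss: {loss:.3f}")
```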
There is so much more to this than I could possibly fit in this article. I'm preparing a notebook to give you a brief intro to GNN code and some of the top frameworks out there for building GNNs and visualizing graph data.
Link to the Colab notebook
https://colab.research.google.com/drive/16T8JY_XG949m-dLSm-AMJvlXwN-VdjUK
If you want to learn more about Graph Neural Networks, check out the survey paper [1].
Thank you for reading; this has been a fun discovery for me too. These kinds of algorithms fascinate me for their raw potential to consume gigabytes or even petabytes of data and uncover hidden patterns far better than we can. Please leave your comments.