
A Practical Approach to Novel Class Discovery in Tabular Data

Too Long; Didn't Read

This paper addresses Novel Class Discovery (NCD) in tabular data without prior knowledge of the novel classes. It tunes hyperparameters through an adapted k-fold cross-validation process that hides some of the known classes, avoiding overfitting to the novel classes, and introduces a simple deep NCD model along with adapted k-means and Spectral Clustering algorithms, demonstrating their effectiveness through extensive experiments on seven tabular datasets.

Authors:

(1) Troisemaine Colin, Department of Computer Science, IMT Atlantique, Brest, France, and Orange Labs, Lannion, France;

(2) Reiffers-Masson Alexandre, Department of Computer Science, IMT Atlantique, Brest, France;

(3) Gosselin Stephane, Orange Labs, Lannion, France;

(4) Lemaire Vincent, Orange Labs, Lannion, France;

(5) Vaton Sandrine, Department of Computer Science, IMT Atlantique, Brest, France.

Abstract and Intro

Related work

Approaches

Hyperparameter optimization

Estimating the number of novel classes

Full training procedure

Experiments

Conclusion

Declarations

References

Appendix A: Additional result metrics

Appendix B: Hyperparameters

Appendix C: Cluster Validity Indices numerical results

Appendix D: NCD k-means centroids convergence study

Abstract

The problem of Novel Class Discovery (NCD) consists in extracting knowledge from a labeled set of known classes to accurately partition an unlabeled set of novel classes. While NCD has recently received a lot of attention from the community, it is often solved on computer vision problems and under unrealistic conditions. In particular, the number of novel classes is usually assumed to be known in advance, and their labels are sometimes used to tune hyperparameters. Methods that rely on these assumptions are not applicable in real-world scenarios. In this work, we focus on solving NCD in tabular data when no prior knowledge of the novel classes is available. To this end, we propose to tune the hyperparameters of NCD methods by adapting the k-fold cross-validation process and hiding some of the known classes in each fold. Since we have found that methods with too many hyperparameters are likely to overfit these hidden classes, we define a simple deep NCD model. This method is composed of only the essential elements necessary for the NCD problem and performs impressively well under realistic conditions. Furthermore, we find that the latent space of this method can be used to reliably estimate the number of novel classes. Additionally, we adapt two unsupervised clustering algorithms (k-means and Spectral Clustering) to leverage the knowledge of the known classes. Extensive experiments are conducted on 7 tabular datasets and demonstrate the effectiveness of the proposed method and hyperparameter tuning process, and show that the NCD problem can be solved without relying on knowledge from the novel classes.


Keywords: novel class discovery, clustering, tabular data, open world learning, transfer learning

1 Introduction

Recently, remarkable progress has been achieved in supervised tasks, in part with the help of large and fully labeled sets such as ImageNet [1]. These advancements have predominantly focused on closed-world scenarios, where it is presumed during training that all classes are known in advance and have some labeled examples. However, in practical applications, obtaining labeled instances for all classes of interest can be difficult due to factors such as budget constraints or lack of comprehensive information. Furthermore, for models to be able to transfer learned concepts to new classes, they need to be designed with this in mind from the start, which is rarely the case. Yet this is an important skill that humans can use effortlessly. For example, having learnt to distinguish a few animals, a person will easily be able to recognise and “cluster” new species they have never seen before. Transposing this human capacity to machine learning could take the form of a model capable of categorizing new products into novel categories.


This observation has led researchers to formulate a new problem called Novel Class Discovery (NCD) [2, 3]. Here, we are given a labeled set of known classes and an unlabeled set of different but related classes that must be discovered. Lately, this task has received a lot of attention from the community, with many new methods such as AutoNovel [4], OpenMix [5] or NCL [6] and theoretical studies [7, 8]. However, most of these works tackle the NCD problem under the unrealistic assumption that the number of novel classes is known in advance, or that the target labels of the novel classes are available for hyperparameter optimization [9]. These assumptions render these methods impractical for real-world NCD scenarios. To address these challenges, we propose a general framework for optimizing the hyperparameters of NCD methods where the ground-truth labels of novel classes are never used, as they are not available in real-world NCD scenarios. Furthermore, we show that the latent spaces obtained by such methods can be used to accurately estimate the number of novel classes.
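To make this idea more concrete, the sketch below shows one way such a tuning loop could look: in each round, a random subset of the *known* classes is hidden and treated as if it were novel, and a hyperparameter setting is scored on how well the method re-clusters these hidden classes. This is a minimal illustration under stated assumptions, not the authors' exact procedure: the `fit_predict` callable, the use of the Adjusted Rand Index as the score, and the random class hold-out (rather than a strict k-fold construction) are all illustrative choices.

```python
import numpy as np
from itertools import product
from sklearn.metrics import adjusted_rand_score

def tune_hyperparameters(X, y, known_classes, param_grid, fit_predict,
                         n_rounds=5, n_hidden=2, seed=0):
    """Score each hyperparameter setting by repeatedly hiding a few *known*
    classes and measuring how well they are re-clustered, so the labels of
    the truly novel classes are never used."""
    rng = np.random.default_rng(seed)
    best_params, best_score = None, -np.inf
    for values in product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), values))
        round_scores = []
        for _ in range(n_rounds):
            # Pretend a random subset of known classes is novel for this round.
            hidden = rng.choice(known_classes, size=n_hidden, replace=False)
            known_mask = ~np.isin(y, hidden)
            hidden_mask = np.isin(y, hidden)
            # fit_predict (hypothetical) trains the NCD method on the remaining
            # known classes and returns cluster assignments for the hidden ones.
            pred = fit_predict(X[known_mask], y[known_mask], X[hidden_mask], params)
            round_scores.append(adjusted_rand_score(y[hidden_mask], pred))
        if np.mean(round_scores) > best_score:
            best_params, best_score = params, np.mean(round_scores)
    return best_params
```

Because the score is computed only on classes whose labels we actually possess, the selected hyperparameters can be transferred to the genuinely unlabeled novel classes without ever peeking at their ground truth.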


We also introduce three new NCD methods. Two of them are unsupervised clustering algorithms modified to leverage the additional information available in the NCD setting. The first one improves the centroid initialization step of k-means, resulting in a fast, easy-to-use algorithm that can still give good results in many scenarios. The second method focuses on optimizing the parameters of the Spectral Clustering (SC) algorithm. This approach has a potentially higher learning capacity, as the representation itself (i.e. the spectral embedding) is tuned to easily cluster the novel data. Finally, the third approach is a deep NCD method composed of only the essential components necessary for the NCD problem. Compared to SC, this method is more flexible in the definition of its latent space and effectively integrates the knowledge of the known classes.
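One plausible reading of the k-means variant is sketched below: centroids for the known classes are set to the labeled class means, the remaining centroids are spread over the unlabeled data with a farthest-point (k-means++-like) rule, and standard k-means then refines everything. The exact initialization and refinement used in the paper may differ; this is only an assumption-laden illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def ncd_kmeans(X_known, y_known, X_novel, n_novel_clusters, seed=0):
    """k-means whose initialization leverages the labeled known classes."""
    # Known-class centroids: the mean of each labeled class.
    known_centroids = np.stack(
        [X_known[y_known == c].mean(axis=0) for c in np.unique(y_known)]
    )

    # Novel-class centroids: greedily pick unlabeled points far from all
    # current centroids (a deterministic, k-means++-like heuristic).
    centroids = list(known_centroids)
    for _ in range(n_novel_clusters):
        dists = np.min(
            np.linalg.norm(X_novel[:, None, :] - np.array(centroids)[None, :, :], axis=2),
            axis=1,
        )
        centroids.append(X_novel[np.argmax(dists)])

    # Refine all centroids with standard k-means on the pooled data,
    # then report cluster assignments for the unlabeled (novel) samples.
    km = KMeans(n_clusters=len(centroids), init=np.array(centroids),
                n_init=1, random_state=seed)
    km.fit(np.vstack([X_known, X_novel]))
    return km.predict(X_novel)
```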


While these contributions can be applied to any type of data, our work focuses on tabular data. The NCD community has focused almost exclusively on computer vision problems and, to the best of our knowledge, only one paper [9] has tackled the problem of NCD in the tabular context. However, this work required the meticulous tuning of a large number of hyperparameters to achieve optimal results. Methods designed for tabular data cannot take advantage of powerful techniques commonly employed in computer vision, such as convolutions, data augmentation or Self-Supervised Learning methods like DINO [10], which have been used with great success in NCD works [11–13] thanks to their strong ability to obtain representative latent spaces without any supervision. Instead, tabular methods have to rely on finely tuned hyperparameters. For this reason, we believe that the field of tabular data will benefit the most from our contributions.


By making the following contributions, we demonstrate the feasibility of solving the NCD problem with tabular data and under realistic conditions:


• We develop a hyperparameter optimization procedure tailored to the NCD setting, so that hyperparameters tuned on the known classes generalize well to the novel classes.


• We show that it is possible to accurately estimate the number of novel classes in the context of NCD by applying simple clustering quality metrics in the latent space of NCD methods (see the sketch after this list).


• We modify two classical unsupervised clustering algorithms to effectively utilize the data available in the NCD setting.


• We propose a simple and robust method, called PBN (for Projection-Based NCD), that learns a latent representation that incorporates the important features of the known classes, without overfitting on them. The code is available at https://github.com/Orange-OpenSource/PracticalNCD.
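As a rough illustration of the second contribution above, the sketch below scores candidate values of k with a standard cluster validity index (here the silhouette score, one of several possible choices) computed on the latent representations of the unlabeled data. The `model.encode` call in the usage note is a hypothetical stand-in for whatever encoder a trained NCD method provides; the paper's actual index and search range may differ.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def estimate_n_novel_classes(Z_novel, k_min=2, k_max=15, seed=0):
    """Pick the number of novel classes by clustering the latent codes Z_novel
    for each candidate k and keeping the k with the best silhouette score."""
    best_k, best_score = k_min, -1.0
    for k in range(k_min, k_max + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(Z_novel)
        score = silhouette_score(Z_novel, labels)
        if score > best_score:
            best_k, best_score = k, score
    return best_k

# Usage (hypothetical encoder from a trained NCD model):
# Z_novel = model.encode(X_novel)
# k_hat = estimate_n_novel_classes(Z_novel)
```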


This paper is available on arXiv under a CC 4.0 license.