Authors:
(1) Limeng Zhang, Centre for Research on Engineering Software Technologies (CREST), The University of Adelaide, Australia;
(2) M. Ali Babar, Centre for Research on Engineering Software Technologies (CREST), The University of Adelaide, Australia.
1.1 Configuration Parameter Tuning Challenges and 1.2 Contributions
3 Overview of Tuning Framework
4 Workload Characterization and 4.1 Query-level Characterization
4.2 Runtime-based Characterization
5 Feature Pruning and 5.1 Workload-level Pruning
5.2 Configuration-level Pruning
7 Configuration Recommendation and 7.1 Bayesian Optimization
10 Discussion and Conclusion, and References
Given the complex configuration space and the diversity of workloads, pruning techniques that reduce workload running time and shrink the configuration search space are a natural way to address these complexities. In this section, we aim to give future practitioners and researchers direction on improving data-collection and training efficiency through various pruning strategies. Specifically, we classify pruning techniques into two levels: the workload level and the configuration level. At the workload level, we distinguish two directions: eliminating redundant queries and reducing workload features. At the configuration level, we present the feature-reduction methods applied in state-of-the-art tuning approaches, which mainly rely on feature projection, importance ranking, or feature clustering.
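To make the workload-level direction concrete, the sketch below shows one common way to eliminate redundant queries: collapsing queries that differ only in their literal constants into a single template and keeping one representative per template. This is an illustrative example, not an implementation from any surveyed system; the function names and the regex-based normalization are our own simplified assumptions.

```python
import re

def query_template(sql: str) -> str:
    """Reduce a SQL query to its template by masking literals,
    so queries that differ only in constants collapse together."""
    t = sql.strip().lower()
    t = re.sub(r"'[^']*'", "?", t)          # mask string literals
    t = re.sub(r"\b\d+(\.\d+)?\b", "?", t)  # mask numeric literals
    t = re.sub(r"\s+", " ", t)              # normalize whitespace
    return t

def prune_redundant_queries(workload):
    """Keep one representative query per distinct template."""
    seen, kept = set(), []
    for q in workload:
        tpl = query_template(q)
        if tpl not in seen:
            seen.add(tpl)
            kept.append(q)
    return kept

workload = [
    "SELECT * FROM orders WHERE id = 10",
    "SELECT * FROM orders WHERE id = 42",
    "SELECT name FROM users WHERE city = 'Adelaide'",
    "select name from users where city = 'Berlin'",
]
pruned = prune_redundant_queries(workload)
# only two distinct templates remain, so two queries are kept
```

Replaying only the pruned representatives (optionally weighted by how many original queries each template covers) shortens data collection while preserving the workload's structural diversity.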
Moreover, future researchers and practitioners can explore dimensionality-reduction techniques tailored to specific data characteristics, as outlined in the survey by Hou et al. [31]. Advances in high-dimensional statistics also offer opportunities to improve feature pruning. For instance, Yang et al. [32] proposed a variant of LASSO, named Efficient Tuning of Lasso (ET-Lasso), that ensures feature-selection consistency: by injecting permuted copies of the features as pseudo-features into the linear model, it efficiently identifies the active features that truly contribute to the response.
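The pseudo-feature idea behind ET-Lasso can be sketched as follows. Note that this is a deliberately simplified illustration, not the authors' method: we score features by absolute Pearson correlation with the performance metric instead of tracking the Lasso solution path, and the synthetic data, variable names, and threshold rule are our own assumptions. The core idea survives the simplification: a permuted column has the same marginal distribution as the real knob but no relation to the response, so the best pseudo-feature score gives a data-driven noise floor for selection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic tuning data: 200 configurations, 10 knobs; only knobs 0 and 3
# actually influence the (normalized) performance metric y.
n, p = 200, 10
X = rng.standard_normal((n, p))
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.standard_normal(n)

# Append an independently permuted copy of each knob column as a
# pseudo-feature: same marginal distribution, but any link to y is broken.
X_pseudo = np.apply_along_axis(rng.permutation, 0, X)
X_aug = np.hstack([X, X_pseudo])

# Importance score per column: absolute Pearson correlation with y
# (a stand-in for the Lasso entry order used by ET-Lasso).
scores = np.abs(
    [np.corrcoef(X_aug[:, j], y)[0, 1] for j in range(2 * p)]
)

# Keep only the real knobs whose score beats every pseudo-feature.
threshold = scores[p:].max()
selected = [j for j in range(p) if scores[j] > threshold]
```

On this synthetic data the truly active knobs 0 and 3 clear the pseudo-feature threshold, while purely noisy knobs rarely do, which is exactly the consistency property the pseudo-features are meant to enforce.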
This paper is available on arxiv under CC BY 4.0 DEED.