
Detailed Results of the Foundation Benchmark


Too Long; Didn't Read

Table 5 reports per-task accuracy for each audio-language model on the foundation benchmark. Speaker Gender Recognition and Synthesized Voice Detection are binary-choice tasks, so random guessing achieves 50% accuracy on them; every other task offers four options, giving a 25% random baseline. Scores close to these baselines indicate a lack of proficiency on the respective tasks.

Authors:

(1) Qian Yang, Zhejiang University, Equal contribution. This work was conducted during Qian Yang’s internship at Alibaba Group;

(2) Jin Xu, Alibaba Group, Equal contribution;

(3) Wenrui Liu, Zhejiang University;

(4) Yunfei Chu, Alibaba Group;

(5) Xiaohuan Zhou, Alibaba Group;

(6) Yichong Leng, Alibaba Group;

(7) Yuanjun Lv, Alibaba Group;

(8) Zhou Zhao, Alibaba Group, corresponding author ([email protected]);

(9) Yichong Leng, Zhejiang University;

(10) Chang Zhou, Alibaba Group, corresponding author ([email protected]);

(11) Jingren Zhou, Alibaba Group.

Abstract and 1. Introduction

2 Related Work

3 AIR-Bench and 3.1 Overview

3.2 Foundation Benchmark

3.3 Chat Benchmark

3.4 Evaluation Strategy

4 Experiments

4.1 Models

4.2 Main Results

4.3 Human Evaluation and 4.4 Ablation Study of Positional Bias

5 Conclusion and References

A Detailed Results of Foundation Benchmark

A Detailed Results of Foundation Benchmark

In Table 5, we report the performance of each model on every task in the foundation benchmark. With the exception of Speaker Gender Recognition and Synthesized Voice Detection, which are binary-choice tasks, every task requires selecting one of four options. Random guessing would therefore achieve an expected accuracy of 50% on the two binary-choice tasks and 25% on all remaining tasks. Consequently, any score that approximates these random baselines indicates no discernible proficiency on the corresponding task.
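These random baselines can be sanity-checked with a small simulation. The sketch below is illustrative only and not part of the benchmark's evaluation code; the function name and trial count are our own choices:

```python
import random

def random_baseline(num_options: int, trials: int = 100_000, seed: int = 0) -> float:
    """Estimate the accuracy of uniform random guessing on a
    multiple-choice task with `num_options` candidate answers."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        answer = rng.randrange(num_options)  # ground-truth option
        guess = rng.randrange(num_options)   # uniform random guess
        correct += guess == answer
    return correct / trials

# Binary-choice tasks (e.g. Speaker Gender Recognition): ~0.50
# Four-option tasks: ~0.25
print(random_baseline(2), random_baseline(4))
```

The estimates converge to 1/num_options, matching the 50% and 25% baselines quoted above.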


Table 5: The accuracy of each model across all tasks in the foundation benchmark.


This paper is available on arxiv under CC BY 4.0 DEED license.