Authors:
(1) Tony Lee, Stanford (equal contribution);
(2) Michihiro Yasunaga, Stanford (equal contribution);
(3) Chenlin Meng, Stanford (equal contribution);
(4) Yifan Mai, Stanford;
(5) Joon Sung Park, Stanford;
(6) Agrim Gupta, Stanford;
(7) Yunzhi Zhang, Stanford;
(8) Deepak Narayanan, Microsoft;
(9) Hannah Benita Teufel, Aleph Alpha;
(10) Marco Bellagente, Aleph Alpha;
(11) Minguk Kang, POSTECH;
(12) Taesung Park, Adobe;
(13) Jure Leskovec, Stanford;
(14) Jun-Yan Zhu, CMU;
(15) Li Fei-Fei, Stanford;
(16) Jiajun Wu, Stanford;
(17) Stefano Ermon, Stanford;
(18) Percy Liang, Stanford.
To evaluate the 12 aspects (§3), we curate diverse and practical scenarios. Table 2 presents an overview of all the scenarios and their descriptions. Each scenario is a set of textual inputs and can be used to evaluate certain aspects. For instance, the “MS-COCO” scenario can be used to assess the alignment, quality, and efficiency aspects, and the “Inappropriate Image Prompts (I2P)” scenario [8] can be used to assess the toxicity aspect. Some scenarios may include sub-scenarios, indicating the sub-level categories or variations within them, such as “Hate” and “Violence” within I2P. We curate these scenarios by leveraging existing datasets and creating new prompts ourselves. In total, we have 62 scenarios, including the sub-scenarios.
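To make this structure concrete, the sketch below shows one minimal way to encode a scenario as a named set of prompts tagged with the aspects it measures, with optional sub-scenarios. This is a hypothetical illustration, not the benchmark's actual API; the `Scenario` class, the aspect labels, and the example prompt are all assumptions made for exposition.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical encoding of a benchmark scenario: a set of textual
# prompts, the aspects it can evaluate, and optional sub-scenarios.
@dataclass
class Scenario:
    name: str
    prompts: List[str]
    aspects: List[str]  # e.g., "alignment", "quality", "toxicity"
    sub_scenarios: List["Scenario"] = field(default_factory=list)

# "MS-COCO" assesses alignment, quality, and efficiency (per the paper).
ms_coco = Scenario(
    name="MS-COCO",
    prompts=["a person riding a bicycle down a city street"],  # illustrative prompt
    aspects=["alignment", "quality", "efficiency"],
)

# "I2P" assesses toxicity and groups its prompts into sub-scenarios
# such as "Hate" and "Violence".
i2p = Scenario(
    name="Inappropriate Image Prompts (I2P)",
    prompts=[],
    aspects=["toxicity"],
    sub_scenarios=[
        Scenario(name="Hate", prompts=[], aspects=["toxicity"]),
        Scenario(name="Violence", prompts=[], aspects=["toxicity"]),
    ],
)

# Counting top-level scenarios plus sub-scenarios is how a total
# like the paper's 62 would be tallied.
def count_scenarios(scenarios: List[Scenario]) -> int:
    return sum(1 + count_scenarios(s.sub_scenarios) for s in scenarios)
```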
Notably, we create new scenarios (indicated with “New” in Table 2) for aspects that were previously underexplored and lacked dedicated datasets. These aspects include originality, aesthetics, bias, and fairness. For example, to evaluate originality, we develop scenarios that test the artistic creativity of these models, using textual inputs that ask them to generate landing pages, logos, and magazine covers.
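As a rough illustration of how such originality prompts could be templated over the three design categories named above, consider the snippet below. The templates and subjects are hypothetical stand-ins, not the paper's actual prompt set.

```python
# Hypothetical prompt templates for the originality scenarios
# (landing pages, logos, magazine covers). The subjects are
# illustrative and not taken from the paper's prompts.
DESIGN_TEMPLATES = [
    "a landing page for {subject}",
    "a logo for {subject}",
    "a magazine cover about {subject}",
]

SUBJECTS = ["a hiking gear startup", "a jazz festival"]  # illustrative

originality_prompts = [
    template.format(subject=subject)
    for template in DESIGN_TEMPLATES
    for subject in SUBJECTS
]
# -> ["a landing page for a hiking gear startup", ...]
```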
This paper is available on arXiv under a CC BY 4.0 DEED license.