Study Finds Generative AI Appears Less Intelligent Yet More Credible Than Humans in Science Writing by @textgeneration


by Text Generation, November 26th, 2024

Too Long; Didn't Read

A study from Michigan State University evaluated the effectiveness of using generative AI to simplify science communication and enhance public trust in science.

Author:

(1) David M. Markowitz, Department of Communication, Michigan State University, East Lansing, MI 48824.

Editor's note: This is part 1 of 10 of a paper evaluating the effectiveness of using generative AI to simplify science communication and enhance public trust in science. The rest of the paper can be accessed via the table of links below.


Abstract

This paper evaluated the effectiveness of using generative AI to simplify science communication and enhance public trust in science. By comparing human-written lay summaries of PNAS journal articles with AI-generated summaries of the same papers, this work assessed linguistic simplicity across summaries and its effect on public perceptions. Study 1a analyzed simplicity features of PNAS abstracts (scientific summaries) and significance statements (lay summaries), observing that lay summaries were indeed linguistically simpler, but effect size differences were small. Study 1b used GPT-4 to create significance statements from paper abstracts, which more than doubled the average effect size without fine-tuning. Finally, Study 2 experimentally demonstrated that simply written GPT summaries fostered more favorable public perceptions of scientists (their credibility and trustworthiness) than more complexly written human PNAS summaries. AI can engage scientific communities and the public via a simple language heuristic; these findings advocate for its integration into scientific dissemination for a more informed society.

Significance Statement

Across several studies, this paper revealed that generative AI can simplify science communication, making complex concepts feel more accessible and enhancing public trust in scientists. By comparing traditional scientific summaries from the journal PNAS to AI-generated summaries of the same work, this research demonstrated that AI can produce even simpler and clearer explanations of scientific information that are easier for the general public to understand. Importantly, these simplified summaries can improve perceptions of scientists’ credibility and trustworthiness, as experimentally demonstrated in this work. With small, language-level changes, AI has the potential to be an effective science communicator, and its possible deployment at scale makes it an appealing technology for clearer science communication.

Science Written by Generative AI is Perceived as Less Intelligent, but More Credible and Trustworthy than Science Written by Humans

Scientific information is essential for everyday decision-making. People often use science, or information communicated by scientists, to make decisions in medical settings (1), environmental settings (2), and many other domains (3). For people to use such information effectively, however, they must have some amount of scientific literacy (4) or at least trust those who communicate scientific information to them (5). Overwhelming evidence suggests these ideals are not being met, as trust in scientists and scientific evidence has decreased over time for nontrivial reasons (e.g., distrust in institutions and political polarization, among many others) (6–8). The public’s trust in scientists and scientific information continues to decline, which calls for more thoughtful research into countermeasures and possible remedies that can be scaled across people and populations.


Several remedies have been proposed to make science more approachable, and to improve the perception of scientists. For example, some propose that being transparent about how research was conducted and disclosing possible conflicts of interest (9, 10), having scientists engage with the public about their work (11), or improving scientists’ ability to tell a compelling story (12) can increase public trust. While there is no panacea for dwindling public trust in science and scientists, extant evidence suggests this is an issue worth taking seriously, and it is imperative that scientists discover ways to best communicate their work with the hope of improving how people perceive them and their research.


Against this backdrop, the current work argues that how one’s science is communicated matters, and that language-level changes to scientific summaries can significantly improve perceptions of scientists. Critically, the evidence in this paper suggests scientists may not be the best messengers for their own work when the goal is to communicate science simply; in other words, it may be difficult for experts to write for non-experts. Instead, as the current research demonstrates, generative AI can effectively summarize scientific writing in ways that are more approachable for lay readers, and such tools can be scaled to improve science communication efforts at a system level.


This paper is available on arxiv under CC BY 4.0 DEED license.