Mitigating AI Bias in Comedy: Community Alignment & Provenance

Written by ethnocomputing | Published 2026/03/10
Tech Story Tags: ai | ai-bias | ai-in-comedy | ai-creativity | community-based-llm-alignment | data-provenance-for-artists | artist-led-data-governance | collective-constitution-ai

TL;DR: Discover three key strategies to align LLMs with creative needs: community-led alignment, relational context integration, and artist-owned data governance. Learn how comedians are reclaiming the AI training process.

Abstract and 1. Introduction

  1. Methods
  2. Quantitative Results and Creativity Support Index
  3. Qualitative Results from Focus Group Discussions
  4. Discussion
  5. Mitigations and Conclusion and Acknowledgments
  6. Ethical Guidance References

A. Related Work on Computational Humour, AI and Comedy

B. Participant Questionnaire

C. Focus

6 MITIGATIONS AND CONCLUSION

Our mixed-method study on AI for creative writing consulted the real domain experts in the subtleties of language: comedians. Building on their insights, we suggest the following avenues to make writing tools work for them (should they wish to use them). First, artist communities could conceptualise and contribute towards building LLMs that are aligned with their intended audiences rather than globally aligned; open-source repositories of user-contributed LLMs [12] could be adapted to artists’ specific needs. Second, the necessary relational context could be integrated when training and deploying such LLMs, for instance by describing the context in which the text is produced and used, and by empowering artists to decide how LLM outputs are moderated. Third, comedians could reclaim ownership of the tools and of the processes for gathering and curating training data for these models, taking inspiration from the data governance used to train some open-source LLMs [111] and providing artists with transparency about data provenance [72, 96]. These remain thorny open questions, which we leave to the readers to address.
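To make the third avenue concrete, here is a minimal sketch, in Python, of what artist-owned data governance with provenance tracking might look like. All names here (`ProvenanceRecord`, `CommunityPolicy`, the example fields) are hypothetical illustrations, not part of the study or of any existing system: each training example carries a provenance record, and a community-defined policy decides which examples are admitted into the training corpus.

```python
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    """Hypothetical record of where a training example came from."""
    contributor: str       # artist or community who supplied the text
    source: str            # e.g. "open-mic set, 2024"
    consent_granted: bool  # explicit opt-in to training use
    licence: str = "CC BY 4.0"

@dataclass
class CommunityPolicy:
    """Hypothetical community-defined filter over candidate training data."""
    require_consent: bool = True
    allowed_licences: set = field(default_factory=lambda: {"CC BY 4.0"})

    def admits(self, record: ProvenanceRecord) -> bool:
        # The community, not the model developer, sets these rules.
        if self.require_consent and not record.consent_granted:
            return False
        return record.licence in self.allowed_licences

# Usage: filter a candidate corpus down to community-approved examples.
policy = CommunityPolicy()
corpus = [
    ProvenanceRecord("comedian_a", "club set, 2024", consent_granted=True),
    ProvenanceRecord("web_scraper", "web crawl", consent_granted=False),
]
approved = [r for r in corpus if policy.admits(r)]
```

The design choice worth noting is that the policy object is data the community can inspect and change, rather than a filter hard-coded by the model developer; this mirrors the paper's call for artists to own both the curation process and its transparency.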

ACKNOWLEDGMENTS

The authors wish to thank Renée Shelby, Jackie Kay, Mark Diaz, Nick Swanson, Remi Denton, Rida Qadri, Maribeth Rauh, Iason Gabriel, Tom Everitt, Merrie Morris, Canfer Akbulut, Nahema Marchal, Boxi Wu, Antonia Paterson, and Ed Hirst, for helpful discussions and suggestions, as well as Shereen Ashraf, Tom Rodenby, Rob Willoughby, Nasem Shalbak, Alyssa Pierce, Robert Ogley, Lorrayne Bennett, Jon Small and Vijay Bolina for support in the study.

Authors:

(1) Piotr W. Mirowski∗, Google DeepMind London, UK ([email protected]);

(2) Juliette Love∗, Google DeepMind London, UK ([email protected]);

(3) Kory Mathewson, Google DeepMind Montréal, QC, Canada ([email protected]);

(4) Shakir Mohamed, Google DeepMind London, UK ([email protected]).


This paper is available on arxiv under CC BY 4.0 license.

[12] Examples of open-source repositories of user-contributed LLMs include https://huggingface.co/models and https://cworkd.ai.

