Generative AI and Contextual Confidence: Challenges to Contextual Confidence from Generative AI


Too Long; Didn't Read

An arXiv paper about maintaining contextual confidence amidst advances in generative AI, offering strategies for mitigation.

This paper is available on arXiv under a CC 4.0 license.

Authors:

(1) Shrey Jain, Microsoft Research Special Projects;

(2) Zoë Hitzig, Harvard Society of Fellows & OpenAI;

(3) Pamela Mishkin, OpenAI.

Abstract & Introduction

Challenges to Contextual Confidence from Generative AI

Strategies to Promote Contextual Confidence

Discussion, Acknowledgements and References

2 Challenges to Contextual Confidence from Generative AI

2.1 Challenges to Identifying Context

Challenge I#1: Impersonate the identity of an individual without their consent and in a scalable, automated, and targeted way.


Generative AI is now able to impersonate specific individuals with high fidelity, in text, audio, and even video. Malicious actors can impersonate specific individuals without their consent, in order to deceive others. Doing so undermines contextual confidence by impairing the target’s ability to identify context, particularly the “who” that they are communicating with. Attackers might leverage high-fidelity impersonations in social engineering attacks, deceiving victims to extract financial resources [3, 4, 25, 26]. They may also use such impersonations to make identity theft and identity fraud cheaper and more convincing.


Challenge I#2: Imitate members of specific social or cultural groups.


Just as generative AI models can convincingly imitate specific individuals, they can also convincingly imitate members of specific social or cultural groups. This possibility for imitation presents opportunities for overcoming various forms of entrenched discrimination alongside new challenges to the identification of context. We focus here on the challenges, while acknowledging that this issue’s complexity extends far beyond the scope of this paper.[10] For example, when indigenous activists discovered that Whisper (OpenAI’s speech recognition tool) could translate Maori, having been trained on thousands of hours of Maori language, they voiced concerns [28].[11] Before language models could imitate a specific social or cultural group with unprecedented fidelity, it was much harder for outsiders to present themselves as members of a specific social group. As a result, it was easier for members of the group to be stewards of their own culture, with some ability to steer internal and external perceptions of their group toward an authentic understanding. By contrast, outsiders who present themselves as members of the group using a language model may not be attuned to biases in the model, and may thus skew internal and external perceptions, reinforcing stereotypes and distorting the imitated culture.


Challenge I#3: Create mimetic models.


A mimetic AI model is a model trained to act as a particular individual in specified scenarios. In the near future, many may delegate their email and text communications to mimetic models (autocomplete, autocorrect, and other “smart compose” technologies serve as early versions of this). It will soon be possible to send a mimetic model to a meeting to participate on an individual’s behalf. As it stands, there are few norms governing the use of mimetic models or the disclosure of their use, and yet these models will soon disrupt social norms and expectations. Mimetic models can erode our ability to identify context by obscuring “who” is communicating, and “how.” In addition, they may strain our ability to protect context – information received by a mimetic model may be used for training or feedback by the model itself or by the organization that trained the model in the first place.[12]


Challenge I#4: Imitate another AI model.


As specific industries and organizations develop specialized AI models for particular use cases, it will be possible for malicious actors to imitate one AI model with another via social engineering strategies. This possibility may be particularly threatening in high-stakes industries like healthcare. For example, a malicious actor connected with a pharmaceutical company could imitate or infiltrate a clinician-approved model to push a specific drug. Such possibilities present new challenges to the identification of context – how can users be sure that they are interacting with the intended model, and not a fraudulent one installed in its place?


Challenge I#5: Falsely represent grassroots support or consensus (astroturfing).


Generative AI makes it cheaper to generate and disseminate messages in support of or in opposition to particular political opinions or consumer products and services. Generative AI can help make these messages appear as if they come from a wide range of individuals, creating the appearance of grassroots consensus. A recent study showed that state legislators couldn’t tell the difference between AI-generated and human-written emails, illustrating how generative AI may destabilize political communication [6]. Astroturfing obscures the “who” and “why” of communication – making it appear as though many disparate individuals came to a platform to express their genuine opinion, when in fact a single actor may be behind the messages, disseminating them with the specific intention of convincing people to adopt a particular position or take a particular action. Astroturfing, especially when powered by generative AI, can harm consumers, amplify social and political polarization, suppress dissent, and strengthen authoritarian regimes.[13]

2.2 Challenges to Protecting Context

Challenge P#1: Disseminate content in a scalable, automated, and targeted way.


Generative AI can produce and disseminate content on a large scale, automating processes that would otherwise require extensive human effort. It can create hundreds of variations of a given article, each tailored to a different demographic or psychographic profile, and send these out in a fraction of the time it would take human writers and even the prior generation of (non-generative) AI tools like predictive analytics. Any particular statement, image, or video can be lifted out of its original context and incorporated into new content that may be specifically designed to elicit a particular kind of response from its target audience. This capability of generative AI can turbocharge the generation and spread of disinformation, misinformation, propaganda and fraud.


Challenge P#2: Fail to accurately represent the origin of content.


A defining feature of generative AI is that it is a black box: it recombines vast amounts of data from a wide range of sources, without tracing exactly how it is drawing on different sources in its provision of new content. It has thus been built in a way that makes it difficult to trace where content originated – the very point of generative AI is to synthesize data across many original contexts for reuse in a new context. When participants cannot be sure that their communications and other content will be attributed to them, they may be wary of communicating or producing content in the first place. Indeed, the origin-less nature of generative AI has initiated heated conversations about intellectual property and fair use.[14]


Challenge P#3: Misrepresent members of specific social or cultural groups.


Generative AI models are trained on data that includes cultural expressions from a variety of groups. They are also trained on datasets that include stereotypes of certain groups, as well as instances of cultural appropriation. When the model recombines these elements into new content, it may replicate harmful stereotypes, engage in cultural appropriation, or fail to respect specific traditions. If the model is used in a commercial setting, it may also enable an outsider to profit from the intellectual or artistic heritage of another group without their consent. These issues closely relate to the issues discussed in P#2 and I#2 and yet are sufficiently distinct to warrant their own discussion. Consider again the discussions about Whisper’s Maori translations. Indigenous activists have not only raised concerns about how Whisper may lead to cultural distortion (discussed in I#2), but have also raised concerns about cultural appropriation, and the extent to which outsiders might misrepresent or misuse cultural narratives through generative AI.


Challenge P#4: Leak or infer information about specific individuals or groups without their consent.


As generative AI models engage users and synthesize data in unprecedented ways, they may leak or infer data without the consent of data subjects. This may occur because AI models are trained on datasets that include sensitive information and produce outputs that inadvertently reveal these details. Or generative models, with their ability to systematically analyze disparate sources of data, may make it possible to re-identify individuals or groups who contributed to training data, or to infer private information through educated guesses. These possibilities for leakage or inference of sensitive information magnify the threats of identity theft and fraud, and may also enable sophisticated social engineering or blackmail schemes [3, 4].


Challenge P#5: Pollute the data commons.


As generative AI models continue to blend and repurpose data outside their explicitly intended contexts, they feed the “data commons” with their output. To the extent that future generations of AI models are trained on these updated “data commons,” they will be even more divorced from context than the original models. This dynamic may lead to “model collapse,” a situation where AI inaccurately represents or loses the ability to represent the complexity or nuances of specific contexts [32]. As more people come to rely on generative AI models to communicate, the threat of model collapse becomes a threat to effective communication.
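To make the recursive dynamic behind model collapse concrete, the minimal sketch below (our illustration, not the experimental setup of [32]) repeatedly fits a deliberately crude “model” to the outputs of the previous generation. The bimodal “real” data standing in for two distinct contexts, the single-Gaussian model, and the sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real" data: two distinct contexts, represented as two Gaussian modes.
real_data = np.concatenate([
    rng.normal(-3.0, 1.0, 5000),   # context A
    rng.normal(+3.0, 1.0, 5000),   # context B
])

def fit_and_sample(data, n_samples, rng):
    """Deliberately crude 'generative model': fit a single Gaussian to the data
    and sample from it. A single Gaussian cannot represent the two-mode
    structure, standing in for a model that blurs distinct contexts together."""
    mu, sigma = data.mean(), data.std()
    return rng.normal(mu, sigma, n_samples), mu, sigma

samples = real_data
for generation in range(1, 6):
    # Each generation is trained only on the previous generation's outputs,
    # mimicking future models trained on a polluted data commons.
    samples, mu, sigma = fit_and_sample(samples, 200, rng)
    print(f"generation {generation}: mean={mu:+.2f}, std={sigma:.2f}")

# After the first generation, the two contexts at -3 and +3 are already merged
# into a single blurred mode; with only 200 samples per generation, the
# estimated spread also drifts, compounding the loss of contextual detail.
```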


Challenge P#6: Train a context-specific model without context-specific restrictions.


Organizations, individuals and industries are training context-specific models. If these context-specific models are not accompanied by corresponding restrictions, there will be new challenges to the protection of context. Information used to train the context-specific model may be reused or recombined in a context other than the one intended, leaking or enabling third-party access to sensitive information. Some models used for narrow applications – such as a diagnostic tool for clinicians – may produce outputs that can only be correctly interpreted by a narrow class of users. If a user outside this narrow class accesses the outputs of the diagnostic model, they may be badly misled in a way that could produce harm to themselves or others.

2.3 Heterogeneity in Impacts of Generative AI on Contextual Confidence

Even in face-to-face communications, different populations vary in their capacity to identify and protect the context of their communications. For instance, people speaking a second language may not pick up on the same nuances and details as someone speaking in their first language. Those with disabilities have different forms of access to – and understanding of – contextual information than those without.


Generative AI can help to overcome some of these obstacles to contextual confidence in face-to-face and digital communication. For instance, for non-native speakers, the technology can generate translations or paraphrases that consider cultural nuances, going beyond literal translation. These tools can ensure that the intended meaning is conveyed without the loss of essential details. For those with hearing impairments, generative AI can generate real-time captions or transcriptions that not only convert speech to text but also identify and highlight key contextual elements, such as the speaker’s tone or the mood of a conversation. For visually impaired individuals, generative AI can produce descriptive text for images, photos, or video content, capturing not just objects but also their interrelationships, the emotions they might evoke, and other subtle context markers. By catering to these specific needs, generative AI can help ensure that each individual can engage with information in a way that is both meaningful and contextually rich.


At the same time, generative AI may exacerbate other varied challenges to contextual confidence, and create new ones for different populations.[15] Populations with limited digital literacy or who might otherwise struggle to navigate shifting technological landscapes will be more vulnerable to the challenges we’ve discussed. Scams and fraudulent schemes disproportionately target elders [34], and there are already examples of attackers using generative AI for these schemes [35]. Social norms that emphasize trust over skepticism can further increase susceptibility, as can economic conditions where promises of quick financial gains are particularly tempting. Veterans and military retirees are at higher risk of targeted scams, often through purported benefits or entitlements [36]. Those who are already disproportionate targets of harassment online – including women, adolescents and the LGBTQ+ community – may continue to be disproportionately impacted by the automated production of harmful content [19].


Although we will not enumerate in further detail how different populations are more or less affected by the challenges we’ve described, we acknowledge the importance of thinking through the heterogeneous risks so that contextual confidence strategies can be properly matched to the groups under threat. Some groups are particularly vulnerable in the present and immediate future – such as older populations – and considering the usability of contextual confidence strategies for specific populations is critical.




[10] Consider, for example, the AI startup erasing call center workers’ accents so that they sound like white Americans [27]. Is this practice reinforcing cultural biases, or fighting them?


[11] Here, we discuss the concerns about the identification of context raised by this incident; the incident also raised concerns about data abuse and cultural appropriation, issues which are broadly about the protection of context (see Challenge P#3 below).


[12] For an overview of ethical issues related to the use of mimetic models, see [29].


[13] [30] provides a detailed survey of how generative AI can power influence operations.


[14] For an overview of the current conversations around generative AI and fair use, see [31].


[15] See [33] for a discussion of recommendations on how generative AI can be developed via an inclusive “Design for All” approach.