
How to Articulate Output Constraints to LLMs

by Structuring, March 19th, 2025

Too Long; Didn't Read

Respondents preferred using a GUI to specify low-level constraints and natural language to express high-level constraints.

Abstract and 1 Introduction

2 Survey with Industry Professionals

3 RQ1: Real-World Use Cases That Necessitate Output Constraints

4 RQ2: Benefits of Applying Constraints to LLM Outputs and 4.1 Increasing Prompt-based Development Efficiency

4.2 Integrating with Downstream Processes and Workflows

4.3 Satisfying UI and Product Requirements and 4.4 Improving User Experience, Trust, and Adoption

5 How to Articulate Output Constraints to LLMs and 5.1 The Case for GUI: A Quick, Reliable, and Flexible Way of Prototyping Constraints

5.2 The Case for NL: More Intuitive and Expressive for Complex Constraints

6 The ConstraintMaker Tool and 6.1 Iterative Design and User Feedback

7 Conclusion and References

A. The Survey Instrument

5 HOW TO ARTICULATE OUTPUT CONSTRAINTS TO LLMS

Fig. 1 shows the distribution of respondents’ preferences for specifying output constraints through either a GUI or natural language. An overarching observation is that respondents preferred using a GUI to specify low-level constraints and natural language to express high-level constraints. We discuss their detailed rationale below:

5.1 The Case for GUI: A Quick, Reliable, and Flexible Way of Prototyping Constraints

First and foremost, respondents considered GUIs particularly effective for defining “hard requirements,” providing more reliable results, and reducing ambiguity compared to natural language instructions. For example, one argued that choosing “boolean” as the output type via a GUI felt much more likely to be “honoured” compared to “typ[ing] that I want a Yes / No response [...] in a prompt.” Another claimed that “flagging a ‘JSON’ button” provides a much better user experience than “typing ‘output as JSON’ across multiple prompts.” In addition, respondents preferred using a GUI when the intended constraint is “objective” and “quantifiable,” such as “use only items x, y, z” or “a JSON with certain fields specified.” Moreover, respondents found GUIs to be more flexible for rapid prototyping and experimentation (e.g., “when I want to play around with different numbers, moving a slider around seems easier than typing”). Finally, for novice LLM users, the range of choices afforded by a GUI constraint can help clarify the model’s capabilities and limitations, “making the model seems less like a black box.” One respondent drew from their experience working with text-to-image models to underscore this point: “by seeing ‘Illustration’ as a possible output style [among others like ‘Photo realistic’ or ‘Cartoon’], I became aware of [the model’s] capabilities.”
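To make this distinction concrete, here is a minimal, hypothetical sketch (the constraint names, JSON fields, and helper functions are illustrative, not taken from the paper or from ConstraintMaker) of how a GUI selection such as a “boolean” output type or a “JSON” button could be mapped to both an instruction appended to the prompt and a programmatic check on the raw model output, which is part of why GUI-specified constraints feel more reliably “honoured” than free-form instructions:

```python
import json
from typing import Callable

def _has_keys(text: str, required: set) -> bool:
    """True if the output parses as JSON and contains every required key."""
    try:
        obj = json.loads(text)
    except json.JSONDecodeError:
        return False
    return isinstance(obj, dict) and required.issubset(obj.keys())

# Hypothetical mapping from a GUI constraint selection to (a) an instruction
# appended to the prompt and (b) a validator for the raw model output.
CONSTRAINTS = {
    "boolean": {
        "instruction": "Answer with exactly one word: Yes or No.",
        "validate": lambda text: text.strip() in {"Yes", "No"},
    },
    "json_fields": {
        "instruction": "Respond only with a JSON object with the keys: name, price, in_stock.",
        "validate": lambda text: _has_keys(text, {"name", "price", "in_stock"}),
    },
}

def apply_constraint(prompt: str, constraint_id: str) -> tuple:
    """Attach the GUI-selected constraint to the prompt and return its validator."""
    c = CONSTRAINTS[constraint_id]
    return f"{prompt}\n\n{c['instruction']}", c["validate"]

# Example: the user clicks a hypothetical "JSON" button in the GUI.
constrained_prompt, is_valid = apply_constraint(
    "Extract the product mentioned in the review.", "json_fields"
)
model_output = '{"name": "toaster", "price": 29.99, "in_stock": true}'  # stand-in for an LLM call
print(is_valid(model_output))  # True -> safe to hand to downstream code
```

Because the constraint is an explicit, machine-checkable object rather than a sentence buried in the prompt, it can be validated (or enforced) the same way every time, which matches respondents’ preference for the GUI on “objective” and “quantifiable” requirements.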


Figure 2: ConstraintMaker’s user interfaces (1-4) & use cases (5-6). After writing the prompt (1), users can easily specify output constraints using a graphical user interface (2 & 3) provided by ConstraintMaker, and the resulting output (4) is guaranteed to follow the constraints. Additional details of this process are discussed in Section 6.
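For intuition about how an output can be guaranteed to follow a constraint rather than merely requested to, the sketch below restricts generation to a fixed set of allowed answers. This is a generic illustration of the idea behind constrained decoding, under assumed names; it is not a description of ConstraintMaker’s actual mechanism, which is covered in Section 6.

```python
from typing import Callable, Iterable

def constrained_choice(score: Callable[[str], float], allowed: Iterable[str]) -> str:
    """Return the allowed continuation the model scores highest.

    Because the answer is selected only from `allowed`, the constraint holds
    by construction -- unlike a natural-language instruction, which the model
    is free to ignore.
    """
    return max(allowed, key=score)

# Toy scorer standing in for an LLM's preference over continuations; a real
# system would mask logits or apply a grammar during decoding instead.
toy_scores = {"Yes": -0.2, "No": -1.5}
print(constrained_choice(toy_scores.get, ["Yes", "No"]))  # -> "Yes"
```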


This paper is available on arxiv under CC BY-NC-SA 4.0 DEED license.

Authors:

(1) Michael Xieyang Liu, Google Research, Pittsburgh, PA, USA (lxieyang@google.com);

(2) Frederick Liu, Google Research, Seattle, Washington, USA (frederickliu@google.com);

(3) Alexander J. Fiannaca, Google Research, Seattle, Washington, USA (afiannaca@google.com);

(4) Terry Koo, Google, Indiana, USA (terrykoo@google.com);

(5) Lucas Dixon, Google Research, Paris, France (ldixon@google.com);

(6) Michael Terry, Google Research, Cambridge, Massachusetts, USA (michaelterry@google.com);

(7) Carrie J. Cai, Google Research, Mountain View, California, USA (cjcai@google.com).