
AI Workflows and Modern Application Design Patterns

by Artem Ivanov, December 8th, 2023

Too Long; Didn't Read

Artificial Intelligence is reshaping user experience design, introducing the intent-based interaction paradigm. Traditional interfaces are giving way to more natural interactions, where users express their desires, not commands. Chatbots, primary AI workflows, contextual interactions, and invisible AI systems present diverse patterns, each requiring unique design considerations. Designers grapple with challenges like cognitive load and interface intuitiveness. Embracing best practices, multiple output options, contextual prompts, and user feedback, AI-driven UX strives for seamlessness and user-friendliness. The evolution of AI in user interactions promises transformative experiences with careful consideration of input, processing, and output stages in various workflows.



User experience design is constantly evolving, but the current rise of artificial intelligence has completely upended the field. AI is ushering in a new era of interaction with a new paradigm. Traditional user interfaces are based on a command-and-control pattern, where users issue the computer a series of commands to accomplish their tasks. AI, however, is making it possible for users to interact with computers more naturally, by telling them what they want, not how to do it.


This new paradigm of interaction with AI, known as the intent-based paradigm, is still in its early stages, but it has the potential to revolutionize the way we interact with computers. For example, imagine being able to tell your computer, "I want to book a flight to Paris," and having it automatically find the best flights and book them for you. Or imagine being able to say, "I need help with my taxes," and having your computer walk you through the process step by step. It opens up entirely new opportunities to design an “ultimate” UX that is truly holistic and seamless.


Intent-based interaction is not without its challenges. Modern large language models (LLMs) are already very good at understanding natural language. However, the state of AI in UX is far from perfect: the current chat-based interaction style requires users to write out their problems as prose text, which produces a high cognitive load.


Additionally, it can be difficult to design UIs that are intuitive and easy to use for this type of interaction. Still, the potential benefits of intent-based interaction are significant, and UX designers are already exploring how best to implement this new paradigm.


In this article, we will explore the rise of intent-based interaction and its implications for UX design. We will discuss the existing types of AI-driven products, how they use input and output patterns, and how they are designed to improve the user experience in an AI environment.


Content Overview

  • How AI changed the designer’s work
  • Main types of AI workflows in products
    • Chatbots
    • Primary (AI-first)
    • Contextual
    • Invisible
  • Conclusion

How AI changed the designer’s work

As we have established, the new challenge for designers working on AI products is to design within an intent-first paradigm. In traditional software, you interact by sending a chain of commands into the system to get a desired output. Your input is a command; it can be anything interactive on your screen, such as buttons, dropdowns, or forms. Combinations of your actions in the GUI form a command, which leads you step by step through your journey.


Command-based interaction


After that set of steps, navigating the product's information architecture, you finally get a solution to your problem: the system's output. For example, when ordering a taxi, you send a set of commands to pick a destination point, set up ride parameters, and finally confirm the ride once the system picks a driver for you.


Intent-based interaction narrows down to an input-processing-output system. You literally tell the system what outcome you want (the so-called prompt), the system processes your input, and it gives you an output. All the calculation steps are on the system; you get only what you need. Returning to our taxi example, in an intent-based system you only need to give your prompt (“Order a taxi to home”), and you will get a ride.


Intent-based interaction
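The input-processing-output loop described above can be sketched in a few lines. Everything here, the `interpret` stub and its intent names, is a hypothetical stand-in for a real language-model call, not an actual API:

```python
# Minimal sketch of an intent-based loop: the user states a goal, the
# system performs all intermediate steps and returns only the result.
# `interpret` and its intent names are illustrative stand-ins.

def interpret(prompt: str) -> dict:
    """Map a free-form prompt to a structured intent (stubbed)."""
    if "taxi" in prompt.lower():
        return {"intent": "order_ride", "destination": "home"}
    return {"intent": "unknown"}

def process(intent: dict) -> str:
    """Execute the intent; all intermediate steps stay hidden from the user."""
    if intent["intent"] == "order_ride":
        return f"Ride booked to {intent['destination']}"
    return "Sorry, I did not understand that."

def handle(prompt: str) -> str:
    # Input -> processing -> output, with no command chain for the user.
    return process(interpret(prompt))
```

In a command-based taxi app, each of those hidden steps would instead be a separate screen the user navigates.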


This paradigm will still require commands in workflows, because machines can make mistakes, and humans sometimes want to modify and control an output, or use it in different flows, to better align the system's behavior with their goals.


Main types of AI workflows in products

When we integrate AI into a user flow, it can work differently depending on product workflows, tasks, problems, technical features, and other limitations. During our research and analysis, certain patterns across products became visible; let's take a look at them. We will also describe how each product uses the input-processing-output pattern, which UI solutions it applies, and how it solves the usability problems typical of AI products. So let's get going.


1. Chatbots

Let's talk about the most obvious example. The current hype around AI is mostly about chatbots and their capabilities. In this pattern, the AI workflow, powered by an LLM, occurs through back-and-forth dialog in a chatbot interface. The user interacts with the chatbot by having a conversation with it, and the chatbot responds to the user's questions and prompts. This type of workflow suits a wide range of tasks and solutions: broad requests, studying new topics, and so on.


However, the current chat-based interaction style creates a high cognitive load for users, as it requires them to write out their problems as prose text. To address this issue, a new role has been developed: the "prompt engineer." Prompt engineers are responsible for eliciting the correct results from ChatGPT by providing the appropriate prompts. In other words, chatbots require a lot of attention to their usability design.


ChatGPT is a well-known chatbot and LLM


Pi, another example of a mobile chatbot


Chatbot inputs

Chatbot products usually use a text prompting input pattern. This approach lets users type and send any request to the system in the form of text, writing anything they want. It provides the broadest range of possible inputs and outputs.


The user prompting pattern usually appears as a text field UI element. In chatbots, it commonly stays fixed in place.


ChatGPT uses a text field, placed at the bottom of the page


Google Bard's text prompting


Pre-written prompts are also often added alongside the input field, reducing the user's uncertainty about what to do with the chatbot, providing suggestions, and simplifying choices.


ChatGPT shows user prompt examples at the beginning of a new chat


Chatbots also quite often offer voice input to fill the text field, letting users speak instead of typing. This creates an almost organic dialog between a person and a computer.


For example, Bard lets users enter a prompt using voice input


Chatbot processing

It is important to show what state the system is in and how it processes the user's request. Different applications use different approaches. The most common approach in chatbots is real-time text generation. Since the algorithm can take a while to deliver a result, showing text generation on the go is a good practice, allowing you to keep the user's attention focused and make the transition between input and output more seamless.


The Pi chatbot assistant shows the output in real time as it is generated
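Real-time text generation boils down to the UI consuming a token stream incrementally. A minimal sketch, assuming a generator stands in for a real streaming model API:

```python
from typing import Iterator

def stream_tokens(full_response: str) -> Iterator[str]:
    """Yield the response word by word, as a streaming model API would."""
    for word in full_response.split():
        yield word

def render_streaming(token_stream: Iterator[str]) -> str:
    """Append each token to the visible message as it arrives."""
    shown = ""
    for token in token_stream:
        shown = (shown + " " + token).strip()
        # A real chatbot UI would re-render the message bubble here,
        # keeping the user's attention while generation is in progress.
    return shown
```

Because the user sees partial output immediately, the gap between input and output feels shorter even though total generation time is unchanged.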


Chatbot output

In chatbot applications, the types of output can vary greatly. This can be text, images (generated by text input, for example), or other results that depend on the target topic of the application and its functions. It is important that the output copywriting reflects the user input and the desired character and personality of the app (if relevant).


If you're going to design a chatbot, you should consider the best practices that other apps already use.


  1. Multiple outputs. Since the system's results may vary, may sometimes be of low quality, or the system may misunderstand the user's request, it is good practice to offer multiple outputs, increasing the chances that one of them matches the query the user wants to see.


Bard suggests three drafts of the system's response


Bing Image Generator creates multiple images on a single prompt


  2. Apologize for inaccuracy. The artificial intelligence within the system might produce incorrect outcomes that lead to confusion, offensive content, or a sense of unease for the user. A good practice is for the system to apologize in advance for any potential inaccuracies.


Bard apologizes in advance for potentially inaccurate results


  3. Saved interactions. Remember users' recent actions within the system so they can refer back to them more easily. Showing recent destinations, searches, and other inputs can be a helpful nudge that eases cognitive load.


Bard shows recent conversation history
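Two of the practices above, multiple outputs and saved interactions, can be condensed into a rough sketch: sample the model more than once and keep a short, deduplicated history. `sample_model` and its output format are illustrative assumptions, not any real chatbot's API:

```python
import random

def sample_model(prompt: str, seed: int) -> str:
    """Hypothetical model call whose output varies with the sampling seed."""
    rng = random.Random(seed)
    return f"Draft {seed}: answer to {prompt!r} (variant {rng.randint(0, 9)})"

def generate_drafts(prompt: str, n: int = 3) -> list[str]:
    """Multiple outputs: more chances that one draft matches the user's intent."""
    return [sample_model(prompt, seed) for seed in range(n)]

class ChatHistory:
    """Saved interactions: keep recent prompts so users can return to them."""

    def __init__(self, limit: int = 5):
        self.limit = limit
        self.recent: list[str] = []

    def add(self, prompt: str) -> None:
        # Most recent first, deduplicated, capped at `limit` entries.
        self.recent = ([prompt] + [p for p in self.recent if p != prompt])[:self.limit]
```

Three drafts is a common default (as in Bard), balancing choice against the extra reading the user must do.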



2. Primary (AI-first)

This type of product is very similar to chatbots powered by large language models, except that its positioning is narrowed to specific use cases, and it can produce very different types of results, from pictures to complex interactive answers to questions in a specific industry.


These products use AI as their primary workflow through full-screen interactions and step-forms.


Copy.ai looks like a chatbot, but its primary function is narrowed down to helping you write marketing copy


DALL·E's prompt input takes first place in the system's workflow, but the output takes the form of generated images


Input for primary type

As we already mentioned, the input looks similar to chatbot input patterns, usually a prompt field for text. Best practices remain similar as well, such as prompt templates (to help start your thought process) and voice input.


Copy.ai prompt field with the possibility to set up input parameters


Also, it sometimes makes sense to let the user set parameters that can be applied to the system's output. This means using criteria sliders and other standard UI patterns familiar to the user.


Recraft, an app for generating graphic assets, uses a criteria slider to set the output's level of detail


Processing for primary type

Processing patterns are consistent with the other workflow types; the best practice here is simply to generate the system's output on the go. But when designing processing for AI-first workflows, pay attention to whether the output can be shown part by part: for example, whether an image can be shown in the process of generation, or whether your technology only allows showing it once completed.


If only the completed result can be shown, consider providing granular progress messages during generation.


Bing Creator shows the progress bar while generating an image



Output for primary type

In AI-first workflows, output types may vary from images to different UI structures (or even dynamic UI elements), depending on what the system is trying to give its user, and what solution it tries to achieve.


When designing this type of workflow, consider offering multiple output results, allow fine-tuning of the output so the user can achieve the desired result, allow re-prompting, and provide a way to give feedback on the system's work.


Bing Create shows multiple images from a single prompt


Copy.ai shows two thumbs-up/thumbs-down buttons for rating an output from the user’s side


Recraft allows infinite re-prompting of an existing image




3. Contextual

In this type, the AI workflow is added on top of the existing primary workflow through triggers and contextual actions. Using an LLM, it offers solutions to contextual tasks through various UI elements.


ClickUp suggests using their AI assistant in the context of the features already offered


Linear offers its AI services to quickly build complex data filtering



Input for contextual workflows

Inputs in contextual workflows depend on the types of tasks and can vary greatly. It can be a textual input, prompt templates, or buttons for enabling specific tasks, such as text summarization.


When using this approach, consider designing the activation/deactivation process: the way the user triggers these prompts or commands.


ClickUp triggers its AI assistant when it is invoked in the document, allowing the user to choose between a set of pre-constructed prompts


In upcoming Dovetail AI features, you’ll also be able to summarize data from various sources. For example, simplify a lengthy support conversation or turn an hour-long customer interview transcript into a few bullet points.


Prompt building

This pattern appears as guided wizards that help users build detailed prompts without writing them, using different UI input elements such as text fields, dropdowns, and radio buttons. These combine into form structures that adapt to the context of the task and split a complex prompt into small logical parts, reducing the load on users and freeing them from thinking hard about the prompt's format. This is especially useful for frequently repeated queries with the same structure: let the user fill in only the parts that change, so they don't have to write a new prompt every time.


ClickUp lets users fill out a form to create a detailed prompt suited to the context of its workflow
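Under the hood, such a wizard is essentially a template filled from structured form values. The template wording and field names below are illustrative assumptions, not ClickUp's actual implementation:

```python
# Sketch of a prompt-building wizard: each UI control (dropdown, text
# field, radio button) maps to one slot in a prompt template, so the
# user never writes the full prompt by hand. Field names are illustrative.

PROMPT_TEMPLATE = (
    "Write a {tone} {content_type} about {topic}, "
    "roughly {length} words long."
)

REQUIRED_FIELDS = {"tone", "content_type", "topic", "length"}

def build_prompt(form: dict) -> str:
    """Assemble a complete prompt from form-field values."""
    missing = REQUIRED_FIELDS - form.keys()
    if missing:
        # In the UI this would surface as inline validation, not an error.
        raise ValueError(f"Missing fields: {sorted(missing)}")
    return PROMPT_TEMPLATE.format(**form)
```

For a repeated query, only the changing fields (say, `topic`) need re-entering; the rest of the prompt structure is reused.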


Processing for contextual workflows

Not surprisingly, contextual systems require a contextual approach to processing design. Again, the right way depends heavily on the tasks the software must accomplish, but general practices stay the same. If possible, show the output as it is generated. If not, show the step at which the algorithm is currently working and an explicit indicator of processing (a loading icon or progress bar).


ClickUp AI generates output on the go


Output for contextual workflows

When designing output for integrated AI workflows, allow the user to check the output in context before applying it to the primary workflow. This lets the user verify the correctness of the AI-generated result and, if it is incorrect, delete or change it.


ClickUp enables checking generated text before it can be inserted inside the document


Combine the benefits of both the command-based and intent-based interaction approaches. Depending on the task your product is solving, let users modify and use the given output to achieve their goals. Provide as smooth and easy a flow as possible.


For example, in situations with a lot of body text, users with cognitive or literacy issues may want to know what the content contains without having to read the whole text. The user can click to view a shortened and simplified version of the text generated by an AI.


Bard uses a dropdown menu of response modifiers, allowing the user to make the result shorter or longer


It is also good practice to store and display recent queries so users can quickly return to them if necessary.


ClickUp shows the user’s recent prompts



4. Invisible

Invisible AI workflows can be considered the most traditional ones. These types have been around for a long time, appearing well before the recent hype around language models. "Invisible" AI is literally invisible because the system processes user actions in the background. Working autonomously, machine learning algorithms find relevant content, improve prompts, and calculate and analyze behavior. Touchpoints and interactions in invisible systems are minimized.


TikTok's recommended videos are an example of the invisible workflow. By analyzing the user's behavior and interests, its AI algorithms predict what will be most interesting to the user.


Invisible systems can also assist users in accomplishing various tasks without interrupting their workflows. Suggestions and autocompletion that adapt on the go are ways of putting this idea into practice. Autocomplete features that offer multiple suggestions at a time also reduce mistakes: when the AI system isn't sure what a user wants, the user can choose from a selected line-up rather than being fed a single option. Still, pay attention: even sensible AI-fueled corrections can be wrong, and overriding them shouldn't be difficult. Design the interaction so users can accept, edit, or reject AI suggestions.


Dovetail provides computed suggestions for surfacing relevant tags for faster analysis, which are seamless and related to the current task, approaching the principles of invisible interaction
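The accept/edit/reject flow for suggestions can be sketched as ranking candidates and leaving the final choice to the user. The prefix-match scoring below is a placeholder for a real model, and the function names are illustrative:

```python
from typing import Optional

def suggest(prefix: str, vocabulary: list[str], k: int = 3) -> list[str]:
    """Offer up to k completions instead of forcing a single guess."""
    matches = [w for w in vocabulary if w.startswith(prefix) and w != prefix]
    return sorted(matches)[:k]

def apply_choice(original: str, accepted: Optional[str] = None,
                 edited: Optional[str] = None) -> str:
    """Accept, edit, or reject a suggestion; overriding the AI stays easy."""
    if edited is not None:    # user edited the suggestion before applying it
        return edited
    if accepted is not None:  # user accepted a suggestion as-is
        return accepted
    return original           # user rejected all suggestions; keep their input
```

Note that rejection simply preserves the user's own text: the AI never silently overwrites what the user typed.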


When creating an invisible experience, make sure the system displays relevant information based on the user's current activities and prioritizes personalized recommendations. Update recommendations quickly and often.


When showing recommendations to the user, clearly state the source of the data and explain why a specific outcome was predicted or suggested.


Spotify shows the source artist, explaining why it picked these playlists


Make it possible for users of the AI system to express their preferences through regular interactions. Acknowledge user feedback and inform them when adjustments will be made. Instead of simply thanking users, explain how their feedback will benefit them. This will make them more likely to provide feedback again. When a user taps on the dislike button, the system should provide immediate feedback and confirm that they will see less of that kind of content in the future.


Spotify’s remove button allows users to refine suggestions, informing the system that they want to see similar songs less often


Conclusion

AI will definitely change the way we interact with computer systems. However, designing AI workflows requires careful consideration of the type of workflow being created and the target user. Whether your AI system is chat-based, contextual, invisible, or primary, it is crucial to keep in mind the best practices that have emerged from other, similar AI systems. Providing multiple output options, apologizing for inaccuracies, and allowing users to express their preferences are just a few examples of the best practices that have been established. Additionally, it is essential to consider the input, processing, and output stages of the workflow and to design them in a way that is seamless and easy to understand for the user. By following these best practices and considering the unique needs of your users, you can create AI products that are effective, efficient, and user-friendly.


Article Credits

Research and Writing:

  • Vadym Oliinyk

Consultation and Editing:

  • Liliya Kizlaitis
  • Veronika Bogush

Produced by: Other Land Product Design Studio