
How to Transform Your Data Into a Voice AI Knowledge Assistant

by PhillComm Global, September 2nd, 2022

Too Long; Didn't Read

Chen Zhang and Eric Turkington say voice-powered “knowledge assistants” can be transformational for employees. Voice tech frees us from the physical constraints of keyboards, giving us a faster input method for queries. This is especially true for members of the deskless workforce, employees whose hands and eyes are frequently tied up with their work. RAIN is working with a wide range of enterprises, from auto repair to construction to restaurants, to help realize the benefits of opening up new modalities and moments of data access.



Almost every enterprise believes data to be one of its most important assets, but most would admit they are not leveraging their data to its full potential. That’s because making data easily accessible to employees is surprisingly hard work. It requires a concerted, ongoing effort to gather, structure, and tag data in order to turn it into knowledge that can be found in the moments when it is most useful.


Of course, this has always been true regardless of the state of technology, from ancient libraries indexing vast physical volumes to today’s cloud-based search engines crawling millions of gigabytes of data. It’s easy to take for granted the magic of the now-ubiquitous digital keyword search, which puts almost any data point just a few mouse clicks and keystrokes away.


But does modern search need to be an exercise in typing keywords? Isn’t there a more natural and even more frictionless way to request and consume data, more akin to how we do so with other humans - by simply asking?


Now there is, with the rapid maturation of voice technology as a reliable, fast, and convenient means of interacting with digital technology and the data it houses. Voice tech frees us from the physical constraints of keyboards, giving us a 3x-faster input method for queries, and opens up entirely new use cases for data access when our hands and eyes are otherwise occupied.


As consumers, we’ve grown accustomed to simply asking our smart speakers or mobile assistants to authoritatively resolve any factual questions or trivia disputes, because it's that much faster. Alexa, Google Assistant, and Siri have grown incredibly robust in the knowledge graphs they consult to perform these tricks of effortless data access, and clever in how they return results that balance brevity with some helpful context.


This affordance of voice tech as a rapid, convenient gateway to knowledge can be even more transformational for employees, for whom knowledge access is not a trivial matter of exploring a curiosity, but a fundamental part of doing their jobs. This is especially true for members of the deskless workforce, employees whose hands and eyes are frequently tied up with their work.


At RAIN, we are working with a wide range of enterprises, from auto repair to construction to restaurants, to help realize the benefits of opening up new modalities and moments of data access for employees. By building voice AI-powered “knowledge assistants,” enterprises enable employees to retrieve (and enter) data with greater ease and speed than ever before, which helps retain employees by alleviating pain points and supports healthier bottom lines through the efficiencies gained.


In a technical sense, these assistants function a little like a layer cake - each layer is crucial to delivering a useful response to any given question. At the top level, the voice interface parses speech into text (Automatic Speech Recognition). The system then goes deeper to derive the user’s intent and meaning from the text string (intent recognition and named entity recognition), and maps these intents and entities onto a domain-specific knowledge graph made up of taxonomies (categorical relationships between entities) and ontologies (maps of meaning and relationships between entities) in order to retrieve the relevant data points. If the user input isn’t sufficient to resolve exactly which entity and attributes to retrieve - for instance, if the utterance is too short or too general - the system invokes its dialog management algorithms to carry out a disambiguation process and eventually land on the answer needed. That answer is then displayed on-screen or spoken back to the user through a text-to-speech engine (the “data presentation” layer).
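
As a rough sketch of how these layers fit together in code, the flow might look like the following. Every name here (transcribe, extract_intent_and_entities, KnowledgeGraph, disambiguate) is a hypothetical placeholder standing in for a real ASR, NLU, or knowledge-graph component, not RAIN’s implementation or any vendor’s API, and the sample answer is toy data:

```python
# Purely illustrative sketch of the "layer cake" described above.
from dataclasses import dataclass

@dataclass
class NluResult:
    intent: str          # e.g. "get_torque_spec"
    entities: dict       # e.g. {"part": "head bolt", "model": "F-150"}

def transcribe(audio: bytes) -> str:
    """ASR layer: speech in, text string out (stubbed with a fixed transcript)."""
    return "what's the head bolt torque on an F-150"

def extract_intent_and_entities(text: str) -> NluResult:
    """NLU layer: intent recognition + named entity recognition (stubbed)."""
    return NluResult("get_torque_spec", {"part": "head bolt", "model": "F-150"})

class KnowledgeGraph:
    """Knowledge layer: taxonomies/ontologies mapping entities to data points."""
    def lookup(self, intent: str, entities: dict) -> list[dict]:
        return [{"answer": "Tighten in three stages per the service manual."}]

def disambiguate(candidates: list[dict]) -> dict:
    """Dialog-management layer: ask follow-up questions until one candidate remains."""
    return candidates[0]

def answer_question(audio: bytes, kg: KnowledgeGraph) -> str:
    text = transcribe(audio)                            # 1. ASR
    nlu = extract_intent_and_entities(text)             # 2. intent + entity recognition
    candidates = kg.lookup(nlu.intent, nlu.entities)    # 3. knowledge graph retrieval
    best = candidates[0] if len(candidates) == 1 else disambiguate(candidates)  # 4. disambiguation
    return best["answer"]                               # 5. data presentation (screen or TTS)

print(answer_question(b"", KnowledgeGraph()))
```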


As this explanation indicates, the sequence of capturing and processing inputs and returning useful outputs can be challenging. But when these assistants work as designed, they are transformational, making an organization's knowledge base searchable in the most natural way possible. There are a few considerations each organization should keep in mind as it explores this exciting new domain of human-computer interaction.


The first involves determining whose needs you can best solve for from a data-enablement perspective: those inside your enterprise, the customers your business serves (if you’re in the data business already), or both. Do your employees routinely struggle to separate the signal from the noise in the data they wade through to do their work? Are there behavioral inefficiencies in accessing this data - such as stopping work and walking across the shop or factory floor to get to a terminal? Are data requests from customers taking up valuable time for employees and customers alike, and could these be simplified and expedited with the right self-service knowledge assistant model?


Next, organizations need to determine what knowledge base and data feeds their users require - and to bring discipline to how that data is managed. If general knowledge domains are needed beyond those specific to the enterprise (e.g., weather, news, or sports data), consider leveraging third-party providers rather than building them from scratch. Regardless of where data is coming from, it is critical to have a “single version of truth” that is continually curated (and consistently labeled) by subject matter experts and accepted and trusted by end users. Data sets need to be supported by robust APIs that can be queried against and return results in a variety of ways, depending on the need.
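
To make “queried against in a variety of ways” concrete, a curated data set might sit behind a small REST API like the sketch below. The endpoint URL, parameters, and response fields are assumptions for illustration only, not a real service:

```python
# Hypothetical query against a curated "single version of truth" API.
import requests

def get_spec(vehicle_model: str, part: str) -> str:
    resp = requests.get(
        "https://knowledge.example.com/api/v1/specs",   # hypothetical endpoint
        params={"model": vehicle_model, "part": part},
        timeout=5,
    )
    resp.raise_for_status()
    record = resp.json()
    # A well-governed data set carries provenance, so the assistant can report
    # who curated the value and when it was last reviewed.
    return f'{record["value"]} (curated by {record["curator"]}, reviewed {record["last_reviewed"]})'
```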


After the foundational data assets are in place, the right voice tech stack can be the secret sauce to your knowledge assistant, making it effortlessly accessible. A big decision is whether to go cloud-based or local/on-device. Unless the knowledge base is fairly small and there is a strong set of use cases for no-connectivity queries, organizations’ first stop should be cloud-based providers, from speech recognition and natural language understanding to text-to-speech. Another key choice is to go big or small - big tech companies offer high accuracy, reliable service, and great documentation, while startups can be easier to work with on highly customized solutions.
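
To see the cloud-versus-local tradeoff in miniature, the sketch below uses the open-source SpeechRecognition Python package (pip install SpeechRecognition); the audio file name is a placeholder. The same audio can be routed either to a cloud recognizer or to an on-device engine:

```python
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("question.wav") as source:   # placeholder file name
    audio = recognizer.record(source)

# Cloud-backed recognition: typically higher accuracy, requires connectivity.
text_cloud = recognizer.recognize_google(audio)

# On-device recognition via PocketSphinx (pip install pocketsphinx):
# works offline, but is generally less accurate on open-ended queries.
text_local = recognizer.recognize_sphinx(audio)

print("cloud:", text_cloud)
print("local:", text_local)
```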


Once a knowledge assistant is up and running, quality control is key so users always receive the right information. This means maintaining a “golden set” of data - data that is of critical importance to users and has been validated - and a robust regression test suite to ensure data augmentations or application changes don’t interfere with prior work.
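
In practice, that regression suite can be as simple as re-running a parametrized test over the golden set after every data or application change. The sketch below assumes a hypothetical golden_set.json file and an answer() entry point; both are placeholders to be adapted to your own stack:

```python
import json
import pytest

from assistant import answer   # hypothetical: your knowledge assistant's query entry point

with open("golden_set.json") as f:
    # e.g. [{"question": "What is the head bolt torque?", "expected": "..."}]
    GOLDEN_SET = json.load(f)

@pytest.mark.parametrize("case", GOLDEN_SET, ids=lambda c: c["question"])
def test_golden_answers(case):
    # Re-run on every data augmentation or application change.
    assert answer(case["question"]) == case["expected"]
```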


While the playbook for creating a knowledge assistant is far longer than this article, these points are instrumental pieces of any successful effort to transform latent data into an asset that can be interrogated as easily as a colleague. If your knowledge assistant does its job right, it will start to feel like the indispensable co-worker you never knew you needed.


About Chen Zhang

Chen Zhang is the Chief Technology Officer at RAIN, a leading voice technology company. Chen sets RAIN’s technical vision, leads the technology team, and manages the company’s technical roadmap, development, and partnerships. Over the past 10+ years, Chen has worked in the Voice and Conversational AI fields as an engineering leader and hands-on developer across companies of different scales, from FAANG to Fortune Global 500 to tech startups.


About Eric Turkington

Eric Turkington is Vice President of Growth at RAIN, where he partners with the firm’s enterprise clients to develop winning voice strategies and applications, in addition to overseeing company growth, industry partnerships, and communications. Eric is a frequent commentator on voice tech trends in publications like The Wall Street Journal, AdWeek, and European Business Review.


About RAIN

RAIN is a leader in voice technology and conversational AI. RAIN builds voice-first software products and supports the world’s leading brands on voice strategy and experience development. Backed by Stanley Ventures and other leading investment firms, RAIN is commercializing productivity-focused voice assistants to address major workplace inefficiencies. In addition to building its own products, RAIN continues to partner with industry-leading brands to develop bespoke voice strategies and products across the B2C, B2B and B2E landscape.


By Chen Zhang, CTO, and Eric Turkington, Vice President of Growth, at RAIN Agency.