Intelligent Agents + Things

by Vijay Sundaram, November 13th, 2016

Artificial Intelligence is all the rage amongst founders and investors. And for good reason: regardless of where you think we are in the hype cycle, it’s increasingly clear AI is eventually going to touch everything. The questions now turn to when and how it will impact specific markets and categories.

I’ve been particularly excited about the consumerization of AI and the impact on everyday products and platforms for consumers and professionals. There’s a tendency to reduce AI to machine learning (ML), the subfield primarily responsible for AI’s resurgence, but ML is just one part of a broader story.

I believe AI enables a new paradigm for how products are built by creators and experienced by users. If Computer Science sparked an era of software-driven programs, Artificial Intelligence is sparking an era of data-driven agents. It’s these intelligent agents that become the products of an AI-first world.

Agents + Things

Agents have a few essential traits: They’re designed to be autonomous, ambient, context-aware, proactive, and even full of personality. They’re equipped with fuzzy interfaces like voice, natural language and vision. They’re architected to sense, learn, decide, and act in environments with less structure (imperfect information, uncertainty, non-determinism, etc.). They’re powered at their core by data and algorithms wrapped in commodity software. And they’re deployed across a constellation of things around us, from PCs and phones to wearables and thereables to robots.
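To make the sense-learn-decide-act framing concrete, here's a minimal sketch of an agent loop in Python. Everything in it is hypothetical and illustrative (the environment, the belief update, the actions); it isn't any particular product's architecture, just the shape of the loop.

```python
import random
import time


class Agent:
    """A toy intelligent agent: sense the environment, update beliefs,
    decide under uncertainty, then act. All names are illustrative."""

    def __init__(self):
        self.beliefs = {}  # the agent's learned model of its world

    def sense(self, environment):
        # Observations are noisy and partial: the agent never sees full state.
        return environment.observe()

    def learn(self, observation):
        # Update beliefs from the new (imperfect) observation.
        for key, value in observation.items():
            prior = self.beliefs.get(key, value)
            self.beliefs[key] = 0.8 * prior + 0.2 * value  # simple smoothing

    def decide(self):
        # Act proactively only when confident enough; otherwise stay quiet.
        if self.beliefs.get("user_present", 0) > 0.7:
            return "nudge_user"
        return "stay_calm"

    def act(self, action, environment):
        environment.execute(action)


class Environment:
    """Stand-in for the messy real world: rooms, devices, people."""

    def observe(self):
        return {"user_present": random.random()}  # noisy presence signal

    def execute(self, action):
        print(f"agent action: {action}")


if __name__ == "__main__":
    agent, world = Agent(), Environment()
    for _ in range(5):  # the ambient loop runs continuously
        obs = agent.sense(world)
        agent.learn(obs)
        agent.act(agent.decide(), world)
        time.sleep(0.1)
```

The point isn't the code; it's that the loop is continuous and ambient, and that decisions flow from learned beliefs rather than hard-coded program logic, which is exactly the shift from software-driven programs to data-driven agents.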

None of these traits on its own is necessarily new or transformative. You can see hints of them poking through the current apps + mobile paradigm: voice assistants on the phone (e.g. Siri), notification-centric apps that deliver content and support interaction directly on the lock screen (e.g. Now, Yo), or apps that passively learn and take actions in the background (e.g. Swarm).

But the synthesis of these traits shapes a new agents + things paradigm. This AI-enabled era, like the Internet- and mobile-enabled eras before it, creates space for new approaches — new strategies, architectures, platforms, products, experiences — that have the potential to transform existing market categories or create new ones altogether.

Voice Assistants

There are already a few highly visible “expressions” of agents + things. The shining example so far is the home-voice-assistant-speaker-thing category pioneered by Amazon Alexa + Echo and, now, Google Assistant + Home. CIRP estimates Amazon has sold 5.1M units since launching in late 2014, not including this holiday season, which saw Echo device sales up 9x over 2015! [Update: CIRP just released a report estimating 8.2M customers own an Echo.]

For now, these products are simplistic, voice-activated assistants and skills embedded in your home environment. It’s only a matter of time before they also become proactive (e.g. nudge to get your attention), context-aware (e.g. detect presence, identify speakers), adaptive (e.g. learn from your behavior), and anticipatory (e.g. take predictive actions on your behalf). AI and ML are lynchpins to making this work, from the user interface to the agent behavior.
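As a rough sketch of what adaptive and anticipatory behavior could look like, here's a toy model that learns when a user habitually makes a request and starts offering it proactively around that time. The class, skill name, and thresholds are invented for illustration; real assistants rely on far richer models.

```python
from collections import defaultdict
from statistics import mean, pstdev


class AnticipatoryAssistant:
    """Toy model: learn *when* a user tends to issue a request, then
    proactively offer it around that time. Purely illustrative."""

    def __init__(self, min_samples=5):
        self.request_times = defaultdict(list)  # skill -> hours of day observed
        self.min_samples = min_samples

    def observe_request(self, skill, hour_of_day):
        # Adaptive: every explicit request becomes a training example.
        self.request_times[skill].append(hour_of_day)

    def proactive_offers(self, current_hour):
        # Anticipatory: if we're near the user's habitual time, offer first.
        offers = []
        for skill, hours in self.request_times.items():
            if len(hours) < self.min_samples:
                continue  # not enough evidence to act proactively yet
            typical, spread = mean(hours), pstdev(hours) or 0.5
            if abs(current_hour - typical) <= spread:
                offers.append(skill)
        return offers


if __name__ == "__main__":
    assistant = AnticipatoryAssistant()
    for hour in (7.0, 7.2, 6.9, 7.1, 7.3):  # user asks for weather around 7am
        assistant.observe_request("morning_weather", hour)
    print(assistant.proactive_offers(current_hour=7.1))  # ['morning_weather']
```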

The agents in these products will also operate on as many things as will host them. Alexa can already indirectly control an ecosystem of smart home devices, and will soon work directly on a crush of 3rd party IoT products integrating her voice capabilities and custom skills. Google Assistant, too, takes shape outside of Home as a voice assistant on the Pixel phone and a bot in the Allo app. In time, agents will become loosely coupled with many devices rather than tightly coupled with mobile, as apps have been so far.

Chatbots

Another expression of agents + things can be found in text-based chatbots on messaging platforms such as Messenger, Slack, and Skype. Analogous to rooms in the real world, messaging apps are data-rich, semi-structured environments in which bots can be ambient, context-aware, proactive, and so on. And they run across a wide range of OS and hardware platforms (iOS, Android, OS X, Windows, and so on; PCs, phones, watches, and more) as well.
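To give a feel for how a bot sits inside a messaging environment, here's a minimal, platform-agnostic message handler. The message fields and intent rules are hypothetical stand-ins, not the real API of Messenger, Slack, or Skype; a production bot would plug into each platform's webhook and NLU layers.

```python
import re


def handle_message(message):
    """Hypothetical chat-room bot: reads a message dict with 'user' and
    'text' fields and returns a reply string (or None to stay quiet)."""
    text = message.get("text", "").lower()

    # Conversational UI: map free-form text onto a couple of toy intents.
    if re.search(r"\b(hi|hello|hey)\b", text):
        return f"Hello {message.get('user', 'there')}!"
    if "remind me" in text:
        # A real agent would parse the time and task, then schedule an action.
        return "Got it. I'll remind you (scheduling not implemented in this sketch)."

    # Ambient by default: most messages get no reply at all.
    return None


if __name__ == "__main__":
    print(handle_message({"user": "vijay", "text": "Hey bot"}))
    print(handle_message({"user": "vijay", "text": "remind me to ship this post"}))
    print(handle_message({"user": "vijay", "text": "just chatting"}))
```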

Early indicators show the bot economy is growing faster than the app economy did, but the AI promise of bots is still contentious. The category can be hard to make sense of because it lumps together three separate trends: conversational UI, new messaging platforms, and deep AI/ML technologies. Most bona fide “bots” incorporate only the first or second of these trends, but a small subset at the intersection of all three represents the agents characterized above. So even though most bots don’t embody AI-enabled agents, and many of them needn’t, some of them do.

It’s in this sense that chatbots are just another embodiment alongside voice assistants — not to mention new modalities like VR/AR-based characters and physical robots! In fact, the ultimate expression of agents + things is the choreography of user scenarios fluidly across environments (home, office, car, transit, …), devices (PC, phone, watch, hearable, …), and interfaces (voice, text, vision, …) through this multiplicity of embodiments.

Endgame: Ubiquitous Computing

The endgame of agents + things may well be the ubiquitous computing era that Mark Weiser and John Seely Brown speculated about two decades ago:

The “UC” era will have lots of computers sharing each of us. Some of these computers will be the hundreds we may access in the course of a few minutes of Internet browsing. Others will be imbedded in walls, chairs, clothing, light switches, cars — in everything. UC is fundamentally characterized by the connection of things in the world with computation. This will take place at many scales, including the microscopic.

They go on to outline the era’s attendant challenge of calmness, a powerful rallying cry for realizing many of the long-standing promises of AI and IoT:

The most potentially interesting, challenging, and profound change implied by the ubiquitous computing era is a focus on calm. If computers are everywhere they better stay out of the way, and that means designing them so that the people being shared by the computers remain serene and in control. Calmness is a new challenge that UC brings to computing. When computers are used behind closed doors by experts, calmness is relevant to only a few. Computers for personal use have focused on the excitement of interaction. But when computers are all around, so that we want to compute while doing something else and have more time to be more fully human, we must radically rethink the goals, context and technology of the computer and all the other technology crowding into our lives. Calmness is a fundamental challenge for all technological design of the next fifty years.

Time will tell when and how all this plays out, but the seeds of paradigm-changing products, platforms, and companies are being sown today. 🤖