What Does Iron Man’s Jarvis Have That Your AI Assistant Doesn’t?

Written by lucaopreacontact | Published 2018/03/25
Tech Story Tags: artificial-intelligence | ux | ui | ironman | jarvis


Beyond a discussion of whether or not Jarvis is true AI, and guessing at whether Intel’s new neural chips would be better off with the more volumetric structure of an ‘Infinity Stone’ (my bet is they would be), what really sets Jarvis apart is persistence.

Not just conversational persistence, mind you, which is one of the biggest things lacking in today’s assistants, but also persistence in time and space.

Consciousness has a funny way of deriving meaning, or at least usable data, as it navigates environments. It can then communicate that meaning in various ways, highlight different aspects of it, and even evolve its own understanding of it over the course of an ongoing conversational exchange.

It also has a knack for developing contextual usefulness, which is akin to figuring out user stories according to environments. Sometimes it can do all that in real time: develop multidimensional meaning and applicable pathways, and even figure out ways to replicate and enhance their benefits according to context and environment.

Jarvis does this.

He can surf multiple virtual pathways: by domain of knowledge and its related taxonomies, by relatedness of time, space, or narrative clues (a better, far more encompassing word would be consciousness), and by conversational requirements, often all three at once.

This is what is actually happening when Iron Man requests some specific bit of knowledge and seconds later, out comes an applicable story. It may not always be perfectly useful, but it is an enrichment upon the currently available data, both in breadth and in vision (interpretation).

Now, vision as such might require human level AI, but maybe it doesn’t.

Maybe one way of pulling it off is simple, truthful summarizing of interrelated knowledge, weighted by relatedness to clues in the initial conversation or request, and especially by the weight of clues available in virtual knowledge.
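As a rough illustration of that idea, here is a minimal sketch of relatedness-weighted summarizing: passages are ranked by how many query terms they share, with rarer terms counting for more (a crude inverse-document-frequency weight). The function and corpus here are hypothetical, not any real assistant’s API.

```python
from collections import Counter
import math
import re

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def extract_summary(query, passages, top_k=2):
    """Rank passages by overlap with the query's terms, weighting each
    shared term by how rare it is across the whole corpus."""
    docs = [tokenize(p) for p in passages]
    df = Counter()                      # document frequency per term
    for d in docs:
        df.update(set(d))
    n = len(docs)
    q = set(tokenize(query))

    def score(doc):
        # rarer shared terms contribute a larger log-weight
        return sum(math.log(n / df[t]) + 1.0 for t in set(doc) & q)

    ranked = sorted(zip(passages, docs), key=lambda pd: score(pd[1]),
                    reverse=True)
    return [p for p, _ in ranked[:top_k]]
```

Real systems would use embeddings or a knowledge graph rather than raw term overlap, but the principle is the same: weight by relatedness to the request, then summarize the winners.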

Interpreting human sentiment and being able to highlight the essential conclusions of human expression, be it article length or tweet length, would go a long way to adding depth to an assistant’s knowledge based answers.

In short, I don’t expect AI assistants to be able to survey battlefield conditions in full spherical flow, pilot a suit, and at the same time live-process both a conversation and endless virtual information and (therein held) human consciousness landscapes, but I do want them to take advantage of their superior speed.

I can google cooking recipes, and I can do so faster than any assistant. I can also input them into a knowledge management system far better than I would be able to do through voice.

Spatially planning my to-do’s in Trello, or in a simple text file — that is, adding and structuring them visually — is similarly a far quicker and more intelligent flow than doing it verbally.

Perhaps such things would be improved by a persistent conversation flow, wherein an assistant would already know where I want specific to-do’s placed, etcetera, but that is a matter of ease, not accomplishment.

Where an AI’s speed would shine, even current AI, is in navigating vast amounts of data, nimbly maneuvering through entire realms of semantic interrelatedness, social graphs, and systems of meaning derived from human expression on a specific topic, and returning an applied summary of such.

Even putting on a tourist’s hat, I want to be able to ask about the most scenic routes around the Tour Eiffel, and get not a link to an article, but a conversational reply giving me several alternatives, with details, highlighted benefits and drawbacks, and so on.

Additionally, having conversational persistence, I would be able to request the assistant to display a written replica of a specific reply, or a list of sources for the information, or a list of the options already discussed, and then the display of a specific option (in this case a route) on a map.
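The conversational persistence described above amounts to keeping structured state for every turn. A minimal sketch, under the assumption that each assistant reply carries its sources and any enumerated options (the class and field names are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    reply: str                                   # what the assistant said
    sources: list = field(default_factory=list)  # where the info came from
    options: list = field(default_factory=list)  # e.g. routes discussed

@dataclass
class Conversation:
    """Keeps every assistant turn so that earlier replies, their sources,
    and any enumerated options can be recalled on demand."""
    turns: list = field(default_factory=list)

    def record(self, reply, sources=(), options=()):
        self.turns.append(Turn(reply, list(sources), list(options)))

    def replica(self, index=-1):
        return self.turns[index].reply           # written copy of a reply

    def sources_for(self, index=-1):
        return self.turns[index].sources

    def options_so_far(self):
        return [opt for t in self.turns for opt in t.options]
```

With state like this, “show me that reply in writing”, “list your sources”, or “put option two on the map” all become simple lookups rather than fresh queries.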

These are merely quick notes on the topic. I know AI designed for writing is already capable of accomplishing most of this, but the computing resources and time needed to pull off their neat tricks may not comfortably fit into the confines of a user device.

It’s probably more than my tablet can currently conjure. But then again, maybe it’s also a problem of design. More can currently be done to move AI assistants towards conversational persistence, systematized memory and more agile space-time intelligence.
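Systematized memory, at its simplest, need not be heavyweight at all: even a JSON file surviving between sessions gives an assistant something today’s mostly forget. A hypothetical on-device sketch (the file name and schema are assumptions, not any product’s format):

```python
import json
from pathlib import Path

MEMORY_FILE = Path("assistant_memory.json")  # hypothetical on-device store

def load_memory():
    """Restore prior session state, or start fresh if none exists."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return {"facts": {}, "history": []}

def remember(memory, key, value):
    """Record a fact and persist the whole memory to disk."""
    memory["facts"][key] = value
    MEMORY_FILE.write_text(json.dumps(memory))

# a later session can now recall earlier preferences:
mem = load_memory()
remember(mem, "todo_board", "Trello: Weekend Plans")
```

The design problem is less about storage and more about deciding what to remember and how to index it, but persistence itself is cheap.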


Published by HackerNoon on 2018/03/25