Tackling the Dirty Jobs of Data and IM with Artificial Intelligence

by spydergrrl, June 4th, 2017

This is the second in what will inevitably be a series of AI-related posts. See my first post, Artificial Intelligence, UX and the Future of Findability.

Every organization has dirty jobs that it should be doing, but that no one wants to do. Work that is so big and daunting and tedious that it gets shelved for a rainy day, or started dozens of times but never finished, or, worse, keeps getting reinvented into yet another un-scalable pilot project.

In our organization, we have a lot of dirty jobs related to Information Management: classifying documents, cleaning the metadata on web pages, defining and assigning taxonomies for search… Lots of these projects start but never seem to end: frameworks are developed, but applying them is just too time-consuming, especially since we generate and hold so much data.

Retroactive work on existing records is difficult and slow. With budget cuts, it’s hard to justify taking resources away from day-to-day business to clean up old data. So maybe we just work differently from today forward. Now we have a disparity between the old records and the new.

Worse, no one changes how they work to incorporate these IM practices into their day-to-day (in fact, we were trained not to assign metadata in the new document management system), so everything just gets messier and messier as time goes on.

Over time, the problem gets bigger and more unwieldy, making it difficult to find information, making us less productive, making our sites less usable, and making us generally inefficient. The idealistic frameworks become dusty and outdated. And then someone decides we should just start anew, kicking off some initiative to define a framework to be used from this point forward. And here we go again.

What if we stopped trying to redefine ourselves and innovate, and instead turned our focus to solving the problems we are ignoring? What if we used AI to just make things work? Could that be the innovation?

We can use tech to do the dirty work for us. We can assign AI to do the work that needs to get done, but that we aren’t actually doing.

Earlier this year, during the business case phase for my project, I realized that there was important work that would be so time-consuming it might never actually get done during the project. We wouldn’t have the time or the resources to do the work by hand, but it was necessary for our success:

  • We need to create an ontology to improve the web search for a specific type of information;
  • We need to help users figure out where to direct their request, among 240+ institutions;
  • We need to provide users with a “simple” way of accessing a service (yes, our mandate actually includes the word “simple”).

These are info and data challenges that AI can help with:

  • mapping out a subject-specific ontology using existing data and semantic analysis;
  • providing recommendations using that ontology and natural language processing (a rough sketch of this step follows the list); and,
  • providing support to users, using both of the above plus a chatbot interface to help them supply all the data we need to deliver the service and fulfill their request.
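
To make the second step concrete, here is a minimal sketch of how the request-routing recommendation might work. It assumes a short plain-text mandate description for each institution, and it uses TF-IDF cosine similarity from scikit-learn as a stand-in for the richer ontology and NLP pipeline described above; the institution names and descriptions are purely illustrative.

```python
# Minimal sketch: route a free-text request to the most relevant institution.
# TF-IDF cosine similarity stands in for the fuller ontology + NLP approach;
# the institutions and mandate descriptions below are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

institutions = {
    "Institution A": "Processes requests about employment records and payroll data.",
    "Institution B": "Handles environmental assessments and land use permits.",
    "Institution C": "Manages immigration files and citizenship applications.",
}

names = list(institutions.keys())
vectorizer = TfidfVectorizer(stop_words="english")
mandate_matrix = vectorizer.fit_transform(institutions.values())

def recommend(request_text, top_n=3):
    """Return institutions ranked by similarity to the user's request."""
    query_vec = vectorizer.transform([request_text])
    scores = cosine_similarity(query_vec, mandate_matrix)[0]
    ranked = sorted(zip(names, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:top_n]

print(recommend("Where do I send a question about my citizenship application?"))
```

In practice, the ontology built in the first step would replace this simple bag-of-words model, and the chatbot from the third step would sit in front of the ranking to ask whatever follow-up questions are needed to complete the request.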

In the past, we tried to do this kind of work by hand and we failed. Mapping out relationships between different data sources, creating new taxonomies, applying them to data, making constant improvements… things got really big, really quickly, and it all became overwhelming. To meet deadlines, we scoped down to a minimum sustainable product, but the end result wasn’t as good or as usable as it could have been if we’d completed the work. We just didn’t have the time or the capacity, no matter how many humans we could throw at the problem.

This is where I think the AI sweet spot lives: This isn’t necessarily about big data by industry standards. It’s about data that’s too big to be processed by hand and therefore wouldn’t be processed at all (or only at a very minimal level). It’s about solving the problems we can’t solve on our own, providing that bump in capacity we need to work better and smarter and be more efficient, to do the work we know we need to do but just can’t manage.

The funny thing is, this work isn’t even frivolous. It’s quite meaningful. It can be the difference between launching a service that can be used and launching a service that is truly usable. Between meeting a deadline and delivering something of which we can be proud. Between doing the bare minimum and exceeding users’ expectations.

So, yes, it’s a dirty job with a noble purpose. But it’s still a dirty job. And that makes it perfect for automation. I’m convinced that there are dozens of uses like this for tech throughout our organizations. But if we spend our time chasing innovation, we might miss the benefits we can realize in short order by tackling the dirty jobs first.