By now, all self-respecting executives have heard of A.I and thought “Mmhm, yeah, I’d like to get myself a piece of that action”. And because they’re executives, they told underlings to get it going, and went back to the golf course. I personally see no problem with that way of doing things, as the underlings then go to consultants such as myself to understand what their boss could have possibly meant by “I want, like, Alexa, but, like, for office chairs” (yes, I have a PowerPoint presentation for that).
There are however a few risks I believe should be considered BEFORE replacing all the chair-whisperers with an algorithm. Indeed, mentally running through the questions below before jumping into an A.I project might mitigate risks, save time and money, and make both the BUILD and RUN phases of said project a lot smoother.
It does not replace in any way, shape or form the due diligence necessary to get such an endeavor off the ground, but provides a useful framework to start a constructive conversation.
Regardless of their coding or data analysis abilities, the people at the top have a key role to play in defining the strategy for an A.I project. Does the company want to disrupt its market by creating a different type of value proposition, à la Amazon? Does it seek to be best in class, à la Amazon? Maybe it aims to stay level in a competitive market, à la Amazon? Or even catch up to a leader, à la Amazon?
You know, I’m starting to sense a trend.
Without being given such a direction, teams will be left to aimlessly dig through data, looking for a story. And with no clear and agreed-upon goal, they’ll be left chasing a moving target, running the risk of rewriting history as the data comes in. That’s why the strategy defined BEFORE any project kick-off should be specific, measurable, attainable, relevant, and time-bound.
“Everyone else is doing it” is a terrible reason to get into the A.I game.
All A.I projects require massive amounts of data to be of any use: to put it simply, an algorithm cannot understand the present, let alone the future, without being keenly aware of the past. There is no specific number of data points that can be given, as it varies wildly, but a start-up which has just launched and has no more than 800 clients clearly does not naturally have the data to launch an A.I project.
If enough data is not available, it either has to be collected internally, which can be incredibly time-consuming (we’re talking years and major restructurings), or gathered through external sources (predicting umbrella demand, for example, would use weather data freely available to all). It’s important to note, however, that unique data, rather than cutting-edge modeling, is what creates a valuable A.I solution.
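The umbrella example can be sketched in a few lines. Everything below is hypothetical (the numbers are made up, and a real project would pull actual forecast data), but it shows how freely available external data, here a rain probability, can drive a demand prediction with nothing fancier than a least-squares fit:

```python
# Hypothetical illustration: blending free external weather data with
# internal sales history to predict umbrella demand. All numbers are
# invented for the example.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, no libraries needed."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# External data: forecast probability of rain, per week.
rain_prob = [0.1, 0.3, 0.5, 0.7, 0.9]
# Internal data: umbrellas sold in those same weeks.
sales = [12, 30, 52, 70, 91]

a, b = fit_line(rain_prob, sales)
# Next week's forecast says an 80% chance of rain:
next_week = a * 0.8 + b
```

The point is not the model, which is deliberately trivial, but the join: the predictive signal comes entirely from data the company never had to collect itself.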
Garbage in, garbage out.
There isn’t much more that can be said. Any good Chief Data Officer will confirm that data should be treated like a physical product, with its components checked for quality before and after it goes into production. You wouldn’t make a BLT if the tomatoes had gone bad and half the bacon was missing, and you shouldn’t run an algorithm on missing or erroneous data. The resulting predictions could not be trusted.
In fact, 80% of the work done when creating an algorithm involves data extraction, cleansing, filling, and normalizing to make sure simple errors can be systematically avoided.
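To make that 80% concrete, here is a minimal, purely illustrative sketch of the cleanse, fill, and normalise steps on made-up records. The field names, the sentinel value, and the mean-imputation rule are all assumptions for the example:

```python
# Hypothetical raw records: one missing value, one sentinel for "unknown".
raw = [
    {"age": 34,   "income": 52000},
    {"age": None, "income": 61000},   # missing age
    {"age": 29,   "income": -1},      # -1 used as an "unknown" sentinel
    {"age": 41,   "income": 48000},
]

# 1. Cleanse: treat sentinel values as genuinely missing.
for row in raw:
    if row["income"] is not None and row["income"] < 0:
        row["income"] = None

# 2. Fill: impute missing values with the column mean.
def impute(rows, key):
    known = [r[key] for r in rows if r[key] is not None]
    mean = sum(known) / len(known)
    for r in rows:
        if r[key] is None:
            r[key] = mean

impute(raw, "age")
impute(raw, "income")

# 3. Normalise: rescale each column to the [0, 1] range
#    so no feature dominates purely by its units.
def normalise(rows, key):
    lo = min(r[key] for r in rows)
    hi = max(r[key] for r in rows)
    for r in rows:
        r[key] = (r[key] - lo) / (hi - lo)

normalise(raw, "age")
normalise(raw, "income")
```

Real pipelines do this at scale with dedicated tooling, but every one of them is doing some version of these three steps.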
And even then…
Algorithms have the ability to systematically “make” unfair decisions without anyone noticing, or even understanding why, making ethics more relevant than ever. As such, teams should systematically make sure that an algorithm which aims to have an impact (ANY impact) on humans is not plagued by bias. This can be done by checking two things: that the data is representative of reality, and that it does not reflect reality’s existing prejudices.
Easier said than done.
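The mechanical half of those checks, at least, can be sketched. On hypothetical data, we can compare each group’s share of the training set against its assumed share of the population, and compare “positive outcome” rates across groups; the group names, labels, and population shares below are all invented:

```python
# Assumed ground truth about the real population (an assumption
# the team would have to source and defend).
population_share = {"group_a": 0.5, "group_b": 0.5}

# Made-up training data: (group, outcome) pairs, outcome 1 = "positive".
training_data = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_a", 1), ("group_a", 0), ("group_b", 0), ("group_b", 1),
]

def representation(rows):
    """Share of the dataset occupied by each group."""
    counts = {}
    for group, _ in rows:
        counts[group] = counts.get(group, 0) + 1
    total = len(rows)
    return {g: c / total for g, c in counts.items()}

def positive_rate(rows):
    """Rate of positive outcomes, per group."""
    rates = {}
    for group in {g for g, _ in rows}:
        outcomes = [y for g, y in rows if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

rep = representation(training_data)    # check 1: representativeness
rates = positive_rate(training_data)   # check 2: outcome disparity
skew = {g: rep[g] - population_share[g] for g in population_share}
```

Here group_a makes up 75% of the data against an assumed 50% of reality, and enjoys a higher positive rate: both red flags. Deciding whether such a disparity reflects prejudice or a legitimate difference is exactly the part that code cannot do, hence the detective.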
Hiring a diverse staff can help spot the reflection of the relevant social context, but this is rarely possible given the current structure of STEM classes… Alternatively, I’d recommend hiring a “bias detective”, a rare unicorn well-versed in both data science and the humanities, to find unknown unknowns within the black box that something as complex as an A.I can become.
Speaking of rare unicorns…
A.I talent is both scarce and monopolised by tech giants. According to the latest reports, there are currently only 22,000 PhD-level experts worldwide capable of developing cutting-edge algorithms. And the ones that don’t work for large tech companies are expensive. Very expensive.
This should however not block enterprising teams from creating something beautiful. As mentioned, good A.I is more about unique data than unique algorithms. Any modern data analyst/developer partnership can use the many open-source libraries to teach themselves the basics and score some of the quick wins necessary to convince the bigwigs to go on a hiring spree (I advise starting with TensorFlow).
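Before even opening TensorFlow, that analyst/developer pair can grasp the core mechanic such libraries automate with a few lines of plain Python: gradient descent on a toy one-parameter model. All the values below are illustrative:

```python
# Toy example: learn w in the model y = w * x by gradient descent.
# The "dataset" encodes a ground truth of w = 2.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

w = 0.0              # start with a deliberately wrong guess
learning_rate = 0.01

for _ in range(1000):
    # Mean squared error L = mean((w*x - y)^2),
    # whose gradient is dL/dw = mean(2 * x * (w*x - y)).
    grad = sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)
    # Step against the gradient to reduce the error.
    w -= learning_rate * grad

# After 1000 steps, w has converged very close to 2.0.
```

Frameworks like TensorFlow do exactly this loop, just with millions of parameters, automatic gradients, and hardware acceleration; understanding it on four data points is a legitimate first quick win.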
In any case, it is likely that everyone will have a bit of data science in them within the next few years, as it will become part of a collective skill-set required of swathes of employees (knowing how to use the Microsoft Office suite, for example, is a given nowadays).
Even if a company has dozens of talented Business Process Owners (often unloved, yet key to all the above), Developers, PhD-level experts and Data Scientists, it will be incredibly hard to get a project off the ground if they’re not made to work together.
Firstly, if the talent is not centralised, these employees will have little job satisfaction, as a common goal shared with the people around you does wonders for motivation. Secondly, data science requires that the statistical, computational, and business sides of the company communicate 24/7. So get those people an open space and some Post-its. Thirdly, if all these fine, talented women and men answer to different bosses, it is likely that different goals will emerge, along with miscommunication and political games.
IT and business infighting is just not productive.
Change management is key here. Speaking of…
We’ve all heard stories of automation and redundancy. And these stories are (mostly) true. This can mean a certain amount of fear within an organisation once an A.I project is announced. “Will it replace jobs?” “Will I have to undergo further training or be let go?” “Will one part of the business get to make decisions that were once made by another department?”
Change is rarely appreciated, and has to be approached through a mix of top-down education and bottom-up consultations, which can take time. It is however necessary.
Getting support from ALL levels of the organisation is paramount for a successful project.
Beyond the occasional internal buy-in, a whole culture has to be developed if a project is to be more than a fling with data science.
I could use a lot of metaphors for this specific matter of interest. Icebergs. Football. An Italian civil engineer, economist, and sociologist… Yet I shall stick with the BLT sandwich: when you put tomatoes, lettuce and bacon between those two pieces of bread, you’re at the very end of a process involving hundreds of workers, and thousands of hours of development. Data science is roughly the same:
The algorithm itself does less than 10% of the work.
In fact, an algorithm resides within an ecosystem which relies on:
Data collection, Data verification, Workflow management, Service infrastructure…
But this is itself part of a wider ecosystem made up of:
APIs (application programming interface), Data storage, DataViz solutions, Monitoring processes, Cyber-security…
If such an architecture does not exist within an organisation, great: it’s easier to start from scratch. If there are, however, existing elements, it is very possible that some sacrifices will need to be made.
There are currently dozens of high-level discussions happening around the world on the matter of A.I and the need for it to be regulated; deepfakes, facial recognition, dark patterns, autonomous weapons, systematic bias… all have wide-ranging ramifications, and the ability to harm millions if unchecked. Soon enough, the number of such matters being discussed at government levels will reach the thousands, as a myriad of laws are likely to be passed to ensure the fairness, safety and transparency of algorithms.
That’s the best we can hope for…
This however means that A.I projects are often moving in unknown legal territory, and could become subject to wide-ranging legal checks at a moment’s notice.
Checking not only current regulations, but being aware of the ones being discussed has always been key in the corporate world, and shall remain so.
Gathering the right data, hiring the right people, reorganising both systems and employees… all of this takes time. A LOT of time. As such, it’d be foolish to say that a dying company could be saved by becoming “an A.I company”.
In fact, if a company is in a time-sensitive crunch, A.I is probably not the answer.
This highlights the need to avoid reactive thinking when devising a strategy, as a company doing so is doomed to play catch-up for the rest of its short lifespan.
And so we are back in strategy territory, completing the loop.
Good luck out there.