Trusted software development company since 2009. Custom DS/ML, AR, IoT solutions https://mobidev.biz
Onboarding AI into a product does not always go smoothly. Some might say it's either a success or a failure, but more often the first results are useless, and actual value appears only after several iterations. So here are stories of what went wrong as AI projects evolved, and how the businesses behind them eventually set things right. No matter how many books we read, at the end of the day we all learn from our own experience. But perhaps these stories, and their takeaways, will lay down at least a few pillows for us to land on!
How Much Data Is Enough? This is a question our clients often ask us, and the answer is never simple. Before data can be "fed" to AI algorithms, it has to be cleaned, and cleaning often shrinks the amount of data that is actually usable. So what you have is not always what you can use.
Another point: the data has to be explained by its owner. Even if the engineers are domain experts, it is essential to make sure everyone is on the same page about what the data means.
The article “Machine Learning Consulting: Does Your Business Really Need It?” covers how to set up an efficient workflow for Data Science and Machine Learning projects.
And to dig deeper into preparing data for AI, see “Unsupervised Machine Learning to Improve Data Quality.”
Meanwhile, here is what our guests have experienced while creating their AI products.
Bernie Caessens, Managing Partner at resolved.be
“A customer of ours came to the conclusion that continuing to build employee schedules by hand was no longer an option, mainly due to the vast number of constraints that needed to be satisfied. So we were engaged to bring AI automation to scheduling (rostering).
When this project landed on our desk, we explored machine learning approaches to solving resource constraint problems (which is what a schedule is, in operations research terms). We had several possible solutions in terms of available constraint solvers: Google's CP-SAT (boolean satisfiability), genetic algorithms, Tabu search, etc.
After gathering all of the customer's constraints, we started coding them into the solvers. It was neither a trivial nor a straightforward exercise, but pretty soon we could present schedules generated by our solver algorithms.
We had mostly green lights on the constraints, so we were pretty happy with them. However, our customer was not! What we failed to grasp was that there is a whole context around each specific constraint that is not always present in its formulation. In fact, we discovered a whole set of 'hidden' constraints in that context: laws, interpretations of labour law, wishes from employees, and so forth. So we had to go back to the constraint identification, formulation, and implementation process, and actually do it together with the customer's employees to ensure we fully captured all the nuances.
My takeaway from this, and I think it applies to all AI projects, is that as data scientists we should not be too quick to step towards modeling/training our algorithms. Domain-specific expertise is an absolute requirement in any AI project! A fact we knew very well and even 'preached' about, but that even we ignored in a project outside our immediate wheelhouse. In this project, we re-learned it the hard way. It will not be forgotten in the future”.
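To make the "coding constraints into a solver" step concrete, here is a minimal pure-Python sketch. It is not one of the solvers named above, just a brute-force illustration of hard constraints, and the employees, days, and constraints (including a 'hidden' one discovered later) are all invented for the example.

```python
from itertools import product

EMPLOYEES = ["ana", "ben", "cid"]
DAYS = ["mon", "tue", "wed"]

def satisfies_constraints(roster):
    """Hard constraints, as they might be coded into a solver.

    `roster` maps each day to the employee covering it, so
    "exactly one employee per day" holds by construction.
    """
    # No employee works two consecutive days.
    for d1, d2 in zip(DAYS, DAYS[1:]):
        if roster[d1] == roster[d2]:
            return False
    # A 'hidden' constraint discovered later with the customer:
    # ben never works Mondays (a labour-law interpretation, say).
    if roster["mon"] == "ben":
        return False
    return True

def feasible_rosters():
    """Enumerate every assignment and keep the feasible ones."""
    rosters = []
    for assignment in product(EMPLOYEES, repeat=len(DAYS)):
        roster = dict(zip(DAYS, assignment))
        if satisfies_constraints(roster):
            rosters.append(roster)
    return rosters
```

A real rostering problem has far too many combinations for brute force, which is exactly why solvers like CP-SAT or Tabu search are used; but the core task is the same, translating each written rule into a check the solver can enforce.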
Fairy tales can be divided into two groups: those where the hero works hard and struggles to achieve results, and those where things happen by chance. AI is definitely in the first category. Still, the amount of effort and struggle can be kept within reasonable bounds through good planning, proper expectations, and a bit of luck.
No matter how much expertise we have, Data Science and Machine Learning are about running experiments before jumping into something big. And those experiments should take not years but weeks to bring an answer: is this the way we should go?
Here are two examples of what those experiments can look like. In one, we needed to understand whether it was possible to estimate human poses and analyze whether an athlete performed an exercise correctly. It was proven that the technology could cover the use case with pretty good quality.
The other experiment involved the development of an AR/AI virtual fitting room. This time the answer was no: it was not possible to create the solution our client was looking for. Each case took less than two weeks to show whether the idea was worth investing in.
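As a rough sketch of how an exercise-correctness check on top of pose estimation can work: a pose model returns keypoint coordinates (hip, knee, ankle, etc.), and correctness rules are expressed as joint-angle thresholds. The function names and the 100-degree threshold below are invented for illustration, not taken from the actual project.

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by keypoints a-b-c
    (e.g. hip-knee-ankle), each given as an (x, y) pair."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0])
        - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return 360.0 - ang if ang > 180.0 else ang

def squat_depth_ok(hip, knee, ankle, max_knee_angle=100.0):
    """A squat rep counts as deep enough if the knee bends
    past the (hypothetical) angle threshold."""
    return joint_angle(hip, knee, ankle) <= max_knee_angle
```

A two-week experiment of this kind mostly answers whether the pose model's keypoints are stable enough, on real footage, for thresholds like this to be meaningful.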
Serhii Maksimenko, Data Science Solution Architect, MobiDev
A client tasked us with developing a PoC for a verification-as-a-service (EVaaS) solution for securing access to sensitive data. The goal was to quickly test the idea and explore the market.
A neural network architecture was designed to identify users visually and to prevent spoofing attacks. We applied a Very Deep Convolutional neural network (VGG) to check whether there was a real person, or just a picture, in front of the camera. And it proved to work in the PoC scenario.
But as soon as we tried to extend the PoC into a full product, an issue appeared. If a person was sitting and the chair back got into the camera's view, the system flagged it as a spoofing attempt and denied access. We spent plenty of time trying to pre-process those images. Eventually, we switched to another approach and had to redesign the entire module.
This story illustrates a simple truth: decide at the very beginning what you're building, an MVP or a PoC. If it's the former, carefully explore the edge cases of your major AI feature; your technology choice will depend on them. If you're after a PoC, don't expect to reuse any of its components later as the product evolves. Even if reuse turns out to be possible, let it be a pleasant surprise; don't count on it while planning.
Malte Scholz, CEO and Co-Founder of Airfocus
Since I run a SaaS company, we introduced chatbots a while ago to help our customer support agents. They streamlined the process a lot and saved us time. However, AI chatbots need to be updated constantly to make sure they provide correct and relevant information. One time this was not done well, and our chatbots ended up sharing wrong answers with customers. Luckily it was nothing significant, but it was enough to create a lot of confusion and even more work for our support agents. We had to correct the mistake and make sure it wouldn't happen again, which was quite time-consuming.
To avoid similar situations, people need to be very careful and very knowledgeable about handling AI at all times. There is little room for mistakes, and managers need to hire the best talent to work with AI. I know some people want to save money, but AI is not something to take for granted.
Shayne Sherman, CEO of Techloris
A big failing of ours, one that took a while to correct, was using AI to screen resumes and assess candidates for our company. We could easily identify whether people had the technical skills and experience, but we had a problem assessing the more nuanced cultural fit and recruiting the right kind of well-rounded candidate.
The problem arose because we used AI to infer the characteristics of ideal candidates from previous hires that had worked out, then used this base data to hire future staff aligned with those characteristics. Our premise was that new hires should do well because they were similar to those who had already succeeded with us, past and present.
The problem manifested itself in that the AI logic didn't just replicate what was good about our existing employees; it also factored in and matched their less desirable behaviors.
Our staff retention rate was below 70% for the two years we used this system. When we reverted to our old hiring process, which relied on cultural fit and gut feel, retention rose back to the 85% mark.
We found that hiring clones didn't work out as we'd expected. It proved that a company needs a mix of personalities, cultures, backgrounds, and skills to form a team.
Let this be my story. A friend of mine once approached me needing to estimate his new AI startup and bring numbers to investors. He wanted to know how much it would cost to develop a product based on GANs that create visual elements with text and style transfer. Moreover, once created, the same object should be excluded from further results. We both knew that such a solution was unique and complex, with solid research to be done. But the numbers mattered at the start.
To make a long story short, the product development budget was divided into two parts: a research budget and a development one. A research budget (PoCs belong here too) is quite simple to calculate: a full-time engineer (or a team) x the time until product delivery (often a marketing deadline). Research lowers further risks and uncertainty and helps adjust budgets for each iteration down the road. The development budget was still a rough estimate. But at least we could assume that the research phase would give the engineers a clear path, so that any setback would send them not back to square one but somewhere in the middle.
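The research-budget arithmetic above can be sketched in a few lines. All the numbers here (rate, team size, duration) are invented for illustration, not figures from the actual estimate.

```python
def research_budget(engineers, weekly_rate, weeks):
    """Research budget = full-time engineer(s) x time until the delivery deadline."""
    return engineers * weekly_rate * weeks

# e.g. two engineers at a hypothetical $3,000/week over a 12-week research phase:
total = research_budget(engineers=2, weekly_rate=3000, weeks=12)  # 72000
```

The development budget has no such closed formula at this stage; it stays a range that gets narrowed after each research iteration.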
There are four key aspects for Data Science and Machine Learning projects to succeed: research, development, management, and a bit of luck. The more you research, the less luck you need.
Written by Oleksii Tsymbal, Chief Innovation Officer at MobiDev.