As someone who needs to run a business, big or small, you are inundated with articles and talks at conferences about how great AI is. You hear a lot about what it can do, about the outcomes of some sexy new research, and about vague assertions of how it will transform your business.
In truth, it can and will transform your business, but only if you can overcome the barriers to entry.
There is a lot of focus on the artificial intelligence itself: the machine learning models and their algorithms, their accuracy, and all the amazing new breakthroughs. It’s very distracting.
This is all well and good, and definitely worth paying attention to. But let’s face it: you’ve got a business to run. In most cases, you’re not trying to boil the ocean.
Let’s talk about the real, practical barriers to entry in AI.
Just out of reach
Having the right dataset is everything. If you’re attempting to classify text or images, be prepared with hundreds, if not thousands, of examples. But the size of the dataset isn’t the real issue: it’s the scope.
If you’re training a model to detect logos in basketball games, your training set had better be examples of logos in basketball games, not just pictures of the logos pulled from the internet. Be prepared to pull frames from your videos, and make sure you have a lot of them. If you expect the model to detect logos from lots of different camera angles, lighting schemes, and so on, you’ll need that variety represented in your dataset.
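One common way to build such a dataset is to sample frames from your own footage at a fixed rate. Below is a minimal sketch, assuming ffmpeg is installed; since there is no real game footage here, it synthesizes a short test clip first (the filenames are hypothetical stand-ins):

```python
import subprocess
from pathlib import Path

# No real footage in this sketch, so synthesize a short test clip using
# ffmpeg's built-in test pattern (a stand-in for an actual game recording).
subprocess.run(
    ["ffmpeg", "-y", "-f", "lavfi",
     "-i", "testsrc=duration=3:size=640x360:rate=25", "game.mp4"],
    check=True, capture_output=True,
)

out_dir = Path("frames")
out_dir.mkdir(exist_ok=True)

# Pull one frame per second; each JPEG becomes a candidate training image.
subprocess.run(
    ["ffmpeg", "-y", "-i", "game.mp4", "-vf", "fps=1",
     str(out_dir / "frame_%04d.jpg")],
    check=True, capture_output=True,
)
print(sorted(p.name for p in out_dir.glob("*.jpg")))
```

Each extracted frame would then be hand-labeled with the logos it contains before any training happens, which is where most of the effort goes.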
The same is true for judging the ‘temperature’ of your customers when they write support e-mails. You can train a model to track that, which might be very useful in customer support reporting and monitoring, but be prepared to manually label all of your sample data. And the samples should be YOUR e-mails. You can start with off-the-shelf sentiment analysis models, but to really get something accurate, you’ll need to use your own dataset, representing examples of real e-mails you expect to get at your company.
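To make the labeling point concrete, here is a toy sketch of a bag-of-words classifier trained on a handful of hypothetical hand-labeled support e-mails (this is an illustration of what labeled data looks like, not how any particular product works; a real dataset would need hundreds of messages from your own inbox):

```python
from collections import Counter

# Hand-labeled samples of (invented) support e-mails. In practice these
# should be real messages from YOUR inbox, labeled by a human.
labeled_emails = [
    ("this is the third time my order arrived broken", "angry"),
    ("thanks so much, the replacement arrived quickly", "happy"),
    ("I am furious, nobody has answered my ticket", "angry"),
    ("great support, problem solved in minutes", "happy"),
]

def train(examples):
    """Count word frequencies per label (a toy bag-of-words model)."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def classify(counts, text):
    """Pick the label whose known words best match the new message."""
    words = text.lower().split()
    return max(counts, key=lambda label: sum(counts[label][w] for w in words))

model = train(labeled_emails)
print(classify(model, "my package arrived broken and nobody answered"))  # → angry
```

The model only knows the vocabulary it was trained on, which is exactly why generic samples pulled from the internet won’t capture the way your customers actually write.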
If this sounds hard, that’s because it is hard. It’s the hardest part.
Roll your own
Building your own machine learning team might seem like the right move for various reasons. Perhaps you think it’s important for protecting intellectual property, or that it will be cheaper. It’s neither.
First of all, you’d need to hire experts. Your full-stack developers aren’t machine learning engineers. There is a difference between developers and data scientists. It doesn’t mean they can’t be both, but it is not safe to assume that all your developers can build models.
Every problem has a different solution in machine learning. There are many different approaches, and the right one depends on a good understanding of the problem. There is a reason why people have Ph.D.s in machine learning. It’s hard!
Most customers I’ve talked to who have spun up their own machine learning team and attempted to build models to solve problems have seen the work take the better part of a year.
But even if you get a team to build a model for your business, you still have a long road ahead of you, because the model needs to be integrated, deployed, and maintained. Each of those three things is a discipline in its own right, and requires unique expertise.
To roll your own machine learning, be prepared to spend a small fortune and have at least 12 months of runway with no guarantee of success.
Deploy, scale and maintain
There are some great web services out there that offer machine learning as a service (MLaaS). These can be a good alternative to building your own model. In some cases, you’ll still have the dataset problem, but you might run into other issues.
Any developer can integrate an API, but there are different levels of quality when it comes to APIs. Some are good, some are terrible. Good is a measure of reliability, ease-of-use, and scalability. The current MLaaS offerings will probably be good enough for some use cases, but if you’re trying to transform your business with AI, you’re going to run into issues with these providers.
All of these providers expose public API endpoints. That means you have to send your potentially sensitive or private data to a public endpoint, to be spirited away to some behind-the-scenes process you have no control over. Furthermore, you’re going to have to pay to move your data to these endpoints, which can get expensive very quickly.
This also doesn’t scale well, as these endpoints have to throttle requests to prevent you from overloading the system. They are, after all, hosting a public-facing cloud service.
But probably the biggest barrier is going to be the dataset problem. Many of these services don’t let you train the models further, and those that do make it a separate process that requires a lot of processing power and GPU acceleration. It can be impractical to, for example, teach a facial recognition system an unknown face in your media, or to correct a result the model got wrong.
You also run the risk of deprecated APIs which can be a big pain to remedy, or changes to how the models are trained which could upend any ontology or logic you’ve built around the process.
These gaps in the current AI market are why we started Machine Box. We wanted to give developers out-of-the-box AI functionality that doesn’t require any machine learning knowledge to operate. We put the models into Docker containers so they can run anywhere and everywhere (on premises or in the cloud), and can scale on demand.
We invented new technology to minimize the amount of data you need to train the models. In some cases, the models come pre-trained and no training data is required to get going. In other cases, you can teach the models new things with a single example. No more gathering 50 samples of a person’s face: with Machine Box, one example is enough.
Training on the fly is important as well. If something is wrong, you can teach the box the correct answer and have it learn from its mistakes, on the fly, while you’re in production.
AI is no longer out of reach. Any developer can now implement powerful machine learning by simply running Docker and writing to an API. Maybe we are boiling the ocean!