Today is a big day for Hasty. After many months in closed beta, we are finally ready to launch Model Playground into open beta. We think it’s very cool, but why should you care?
Over the last few years, we have talked to many, many AI teams. One thing we heard repeatedly is that the vision AI software stack has grown out of control. Most teams use one tool for annotation, another for data curation, a third for experiment tracking, and a fourth for serving models to production. And that is before counting the glue code organizations write themselves to connect all these tools. Today, companies are forced to maintain their own data lakes, integrate data pipelines, and even support entire in-house tools.
All this software and engineering comes with many problems. Here is a summary of what we heard:
Our solution to the problems outlined above is our new Model Playground. It combines with our existing Hasty product to deliver a true “one-stop shop” for your vision AI needs. Now that we have your attention, let’s get into what makes Model Playground so unique.
In short, Model Playground is a model experimentation and building environment where you can train and benchmark models on your data without writing any code yourself.
“Aha!” you might say, “another no-code solution that has simplified model development down to a couple of clicks.” No, that’s not what we are talking about here.
We’ve seen many no-code solutions. We think they are fine for playing around, but almost all of them give you limited control over the parameters and come with only a handful of architectures.
We, on the other hand, don’t want to disempower ML engineers. We don’t know your use case. We haven’t seen your data. Given that, dumbing down model development when our primary customer base is ML engineers and solutions architects would be foolhardy. We can’t deliver the results. You can!
So, we engineered a solution where you still pick everything, from architecture to transforms, yourself. We just take care of the boring parts: running the experiment and giving you insight into the training progress.
We started as an automated annotation tool three years ago. Today, we are (probably) the fastest annotation tool on the market. But why is this interesting in the context of Model Playground?
As we already handle your data when you create training data (either on our servers or by connecting to your own storage), we now let you run your experiments in the same environment. This means no data loaders or extensive import/export jobs are needed anymore, freeing up your time for more meaningful work.
We include data versioning to ensure that models are trained on the correct data, no matter who sets up the training - no more spreadsheets to keep track of it. With Hasty, you now have a single source of truth for your vision AI models.
We have taken our time to ensure that you have plenty of architectures, optimizers, schedulers, and more at launch. As a team with many ML engineers ourselves, we didn’t want to limit our users to a few select options, because we think it’s important to experiment and figure out what works as you go.
We’ll continue adding more parts to our Model Playground, but as of today, we have the following building blocks available:
We created a suite of building blocks for you to rapidly set up experiments and play with many different configurations. No more spending hours getting architectures to work or reading through configs to figure out what’s missing to get that new optimizer to run.
Get to the ideal setup for your use case faster, and focus on producing results.
Is that one killer building block you need missing from the infographic above? Contact our Product Manager Kristian at kristian@hasty.ai, and he’ll make sure we look into it as soon as possible. It doesn’t take long to add things to our framework! ;)
Data creation and curation tend to be the most expensive parts of any AI project. Today, much of the work is done either manually or by using pre-trained models to pre-label data. Both approaches come with drawbacks.
For starters, why is it so hard to use your own models to help with annotation? You have something that works. So, how do you get that model to assist with data creation and curation? Sure, you can pre-label the next batch of data, but doing so will scale up any issues your model has. We have seen teams spend more time cleaning up bad pre-labels than they would have spent labeling 100% manually.
We think we have a better solution. Our annotation environment uses a human-in-the-loop approach so that you only take the good predictions from the model and ignore the bad ones. Our experience shows this results in much faster data creation.
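To make that concrete, here is a minimal sketch of the filtering idea behind any human-in-the-loop workflow. The threshold, prediction schema, and function name are our own illustration, not Hasty’s internal API - in the product, this all happens inside the annotation UI:

```python
# Illustrative sketch only: threshold and prediction schema are assumptions.
CONFIDENCE_THRESHOLD = 0.8  # cut-off to tune per use case

def split_predictions(predictions):
    """Keep confident predictions as pre-labels; route the rest to a human."""
    accepted, needs_review = [], []
    for pred in predictions:
        if pred["score"] >= CONFIDENCE_THRESHOLD:
            accepted.append(pred)      # the annotator only verifies these
        else:
            needs_review.append(pred)  # the annotator labels these manually
    return accepted, needs_review

# Example: only the confident prediction becomes a pre-label.
preds = [{"label": "scratch", "score": 0.93}, {"label": "dent", "score": 0.41}]
good, review = split_predictions(preds)
```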
With the launch of Model Playground, we also offer you the ability to replace our default models with your custom ones. When you are happy with how a model performs, you can switch models in two clicks. That way, what you do on the model side directly benefits your data workforce.
Additionally, by using your model for annotating and curating, you will get a much better idea of what works and what doesn’t. Deploy your model. Annotate a hundred images and see how the model performs on new data. Creating this feedback loop between data and model will allow you to find systemic issues way before users experience them.
We are running an alpha to bring your models directly into our environment. If you want to have access or know more about that, Tobias (tobias@hasty.ai) is your guy.
One of our investors, Isaac J. Roth at Shasta, had an excellent explanation for why he wanted to invest in Hasty. Having been around the San Francisco startup scene since the ’80s, he saw that the issues many teams struggle with today when working on ML projects are the same ones software engineering teams had before the boom of DevOps solutions. In short, you have to be your own plumber.
Today, you don’t have to build your own version control – you use some flavor of Git. You don’t need to write an integration by hand to run tests and deploy – you use some continuous integration software. If you are doing ML, however, it feels like DevOps used to feel back in the day.
For most projects we’ve seen, this means building and integrating many different software and services into a training pipeline. Beyond being a massive engineering task, this also comes with a host of problems. What should be a small change takes a long time to refactor as you’ll need to fix different parts of the pipeline. Integrations stop working as APIs change without notice, and you end up having to go through logs to figure out what exactly went wrong this time.
With the launch of Model Playground, this is no longer necessary. By combining data cataloging, annotation, and curation with model building and experiment tracking, you no longer need to build that sprawling pipeline – everything lives in one tool.
(Of course, there are still bits and pieces that you will have to code yourself for the actual application, but we hope that we can remove a sizable pain from your AI development processes.)
So you can train a model and use it to speed up data work, but that’s not the end goal. You want to get a model into production (or, at least, your boss does).
At this point, you might be wondering, “But how exactly are they going to limit me in how I use my model?” That’s a valid question. Many solutions out there offering something similar to Model Playground are walled gardens. They calculate that once they have a working model in their environment, they can keep squeezing you for money. Others allow you to move your model out of their environment but impose other restrictions: you might be able to export it, but only to their proprietary formats.
We didn’t like either approach when we were on the other side of the table, so we decided to go another way.
In Hasty, you can export any model trained in Model Playground to TorchScript and ONNX formats (TensorFlow support is in development). We support ARM, x86, Texas Instruments, NVIDIA Jetson, and a smattering of other processors. This means that any model you train with us can be used on most hardware and in most environments.
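To illustrate how portable these exports are, here is a minimal sketch of loading them outside Hasty. The file names and input shape are placeholder assumptions; since TorchScript and ONNX are standard formats, the loading code is plain PyTorch and ONNX Runtime:

```python
import numpy as np
import onnxruntime as ort
import torch

# TorchScript: runs with PyTorch alone, no original training code required.
ts_model = torch.jit.load("model.torchscript.pt")  # placeholder file name
ts_model.eval()
with torch.no_grad():
    output = ts_model(torch.rand(1, 3, 224, 224))  # adjust to your input shape

# ONNX: the same model via ONNX Runtime, which also targets CPU and ARM backends.
session = ort.InferenceSession("model.onnx")  # placeholder file name
input_name = session.get_inputs()[0].name
onnx_output = session.run(None, {input_name: np.random.rand(1, 3, 224, 224).astype(np.float32)})
```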
If you prefer a managed hosting solution, we provide that too. You can leave the model with us and access it via API. To keep it as fair as we can, we charge only for usage, not for upkeep. Therefore, you only pay when you and your users get something of value back.
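Calling a hosted model then boils down to a plain HTTP request. To be clear, the endpoint, auth scheme, and payload below are hypothetical placeholders, not our documented API - check the documentation for the real contract:

```python
import requests

API_KEY = "YOUR_API_KEY"                         # placeholder credential
ENDPOINT = "https://api.example.com/v1/predict"  # hypothetical endpoint

with open("image.jpg", "rb") as f:
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": f},
    )
response.raise_for_status()
print(response.json())  # e.g. predicted labels or boxes; each call counts as usage
```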
In keeping with the metaphor, our garden is not walled, but rather full of attractions that we believe will make you WANT to stay. We managed to convince our internal AI team - now we hope to do the same for you.
Three years ago, the founders of Hasty were all working for a digital consultancy serving German Hidden Champions (the Mittelstand). We worked on many vision AI projects where we had to deliver production-ready models to European manufacturers, covering every use case from quality control of metal parts to automated urine analysis (yes, that’s a thing).
Back then, we hit many roadblocks when adopting new software. We would get quoted a five-figure sum for a PoC by an annotation tool (don’t even get us started on model hosting or upkeep!). So we were forced to build a lot of software ourselves that we could have bought if the pricing model had been more reasonable.
For many, this is still the case. In no other space is pricing as opaque as in ML. Most ‘pricing pages’ have a “contact us” button. One company even told us, “What’s wrong with arbitrary pricing?” Having experienced it first-hand, we understand the frustration, and we hope to alleviate some of those pains.
Our philosophy has always been that we charge you when we deliver something of value. For us, that means AI automation or heavier computation where you pay-per-use. Our pricing model scales with your use case.
Starting today, anyone with a Hasty account will also receive access to Model Playground. To get started, you only need to go here:
If you don’t have an account yet, you can sign up for free and try us out. It’s easy to import your data (or use the demo project you get when signing up) and then take Model Playground for a spin.
If you want to know more about Model Playground, you can also check out our documentation or look at the video we are releasing tomorrow, where we do a walkthrough on how to train a model in Hasty. We also built a wiki to explain all ML terms needed to run Model Playground.
If you have any questions or feature requests, feel free to direct them to me ([email protected]), our Head of Strategic Projects Tobias (tobias@hasty.ai), or our Product Manager Kristian (kristian@hasty.ai). If you are really eager, why not email all three?