🎉 Supervisely v2.0: supercharge your training data pipeline with Deep Learning

by Supervise (@deepsystems), February 13th, 2018
Supervisely helps companies, researchers, engineers, students, and many others create and prepare training data for a wide range of computer vision tasks, from pedestrian detection to tumor segmentation. We're excited that more than 2,000 people are already using Supervisely. A big thank you to everyone helping to improve it!

Today we're announcing Supervisely 2.0, and this is a big one: the new version is now public, and we're happy to introduce a number of new and exciting features we hope everyone will enjoy.

The aim of this post is to announce the new features and explain how Supervisely changes the way training data for Deep Learning is created, by using Deep Learning itself.

It will be especially interesting to those who consider computer vision strategically important and are ready to invest in an ecosystem for image annotation at scale for their current and future AI products.

What exactly is Supervisely?

Deep Learning is here to stay. The more training data you have, the smarter your AI becomes. Our mission is to provide companies with the tools to perform image annotation as efficiently as possible.

Before announcing the new features, I would like to show you a few scenarios of how Supervisely can be used.

Use case: image annotation with AI-powered tools

The first use case is pretty simple. Upload your data and start manual annotation with our AI-powered tools, which are designed specifically for the semantic segmentation annotation process. When annotation is finished, you can download the images and annotations in the format you need.

Use case: speed up image annotation with ready-to-use NNs

The second use case illustrates the usage of a pretrained neural network from the Model Zoo. After uploading images, the user can apply a neural network to the dataset for pre-annotation. There are many cases where ready-to-use models can speed up the annotation process, e.g. detecting workers on construction sites with bounding boxes or segmenting people in selfie images.
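To make the pre-annotation idea concrete, here is a minimal sketch using a pretrained detector from torchvision's model zoo. It is purely an illustration (Supervisely ships its own Model Zoo), and the image file name is a placeholder:

```python
# Illustrative only: pre-annotate one image with a pretrained torchvision detector.
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# "construction_site.jpg" is a hypothetical file path.
image = convert_image_dtype(read_image("construction_site.jpg"), torch.float)
with torch.no_grad():
    prediction = model([image])[0]

# Keep only confident boxes as draft annotations for a human to review and correct.
draft_boxes = prediction["boxes"][prediction["scores"] > 0.5]
print(draft_boxes)
```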

Use case: human-in-the-loop annotation

The third use case demonstrates the human-in-the-loop AI approach. The user trains a neural network for their custom task and then uses this NN to pre-annotate images, so they only have to correct the NN's predictions. The process is iterative: the NN becomes smarter over time, the amount of annotated data grows, and annotation accelerates, so the user can repeat the procedure until the necessary accuracy is reached. As a result, the user gets both a big dataset with high-quality annotations and an accurate NN for their specific task.
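Here is a toy, self-contained sketch of that loop using scikit-learn, just to show the mechanics. It is not Supervisely code: the ground-truth labels stand in for the human corrections, and the point is that the number of corrections needed typically shrinks as the model is retrained on more data.

```python
# Toy human-in-the-loop cycle: pre-annotate, count corrections, retrain.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, random_state=0)
labeled = np.arange(100)            # small initial labeled pool
unlabeled = np.arange(100, 2000)    # images still waiting for annotation

for round_ in range(5):
    model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    batch, unlabeled = unlabeled[:200], unlabeled[200:]
    preds = model.predict(X[batch])                 # model pre-annotates the batch
    corrections = int((preds != y[batch]).sum())    # "human" fixes only the mistakes
    labeled = np.concatenate([labeled, batch])      # corrected batch joins the dataset
    print(f"round {round_}: corrected {corrections} of {len(batch)} pre-annotations")
```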

Supervisely v2.0: Neural Networks

Now it is possible to use state-of-the-art neural networks inside Supervisely without any coding.

Let's consider the semantic segmentation example to better understand the motivation behind it. Nowadays, semantic segmentation is one of the key problems in the field of computer vision. Looking at the big picture, it is one of the high-level tasks that pave the way towards complete image understanding.

Example of semantic segmentation: each pixel in the image is assigned an object class. Left: satellite image. Right: retina vessels.

Before deep learning took over computer vision, people used classical approaches that were not very accurate. A few years ago it was hard to predict which method should be applied to a specific segmentation task; answering that question required expertise in the field and a lot of research.

But now the situation is the opposite. There are plenty of benchmarks and literature on semantic segmentation, and for about 80% of business tasks it is clear enough which neural network architecture should be used.

So we decided to integrate state-of-the-art neural network architectures and their pretrained weights for different computer vision tasks, such as object detection, semantic segmentation, instance segmentation, text detection, OCR, classification and tagging.

And now a user without any special knowledge can utilize the power of deep learning for a wide range of real-world tasks.

We introduce the NN module. We added the ability:

  • to work with ready-to-use NNs from the Model Zoo
  • to train them on your data right inside the system
  • to apply them (inference) to your images in a distributed manner and preview the predictions
  • to deploy them as API services with a few clicks (see the sketch below)
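As an illustration of the last point, here is a hedged sketch of what calling a deployed model over HTTP could look like. The endpoint URL and payload shape are placeholders, not Supervisely's actual API:

```python
# Illustrative only: send an image to a hypothetical deployed-model endpoint.
import base64
import requests

with open("photo.jpg", "rb") as f:                       # placeholder image file
    payload = {"image": base64.b64encode(f.read()).decode("ascii")}

resp = requests.post("https://example.com/deployed-model/inference",
                     json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())   # predicted objects / masks, depending on the model
```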

Supervisely v2.0: Data Transformation Language (DTL)

Data scientists spend a huge amount of time preparing training datasets. Routine tasks like combining datasets, class mapping, filtering objects and images, and various kinds of data augmentation are automated in Supervisely.

The Data Transformation Language (DTL) module allows you to define computational graphs in a simple JSON format. These configuration files tell Supervisely how you want to process images and their annotations.

Example of a computational graph
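For illustration, here is a sketch of what such a JSON graph definition might look like. The layer names and settings are hypothetical and only show the general shape of a DTL-style config; see the DTL documentation for the real vocabulary:

```python
# Illustrative only: a DTL-style graph as a list of layers, serialized to JSON.
import json

graph = [
    {"action": "data", "src": ["project_a/*"], "dst": "$in", "settings": {}},
    {"action": "flip", "src": ["$in"], "dst": "$flipped",
     "settings": {"axis": "horizontal"}},
    {"action": "crop", "src": ["$flipped"], "dst": "$cropped",
     "settings": {"random_part": {"height_percent": [60, 90]}}},
    {"action": "save", "src": ["$cropped"], "dst": "augmented_project",
     "settings": {}},
]
print(json.dumps(graph, indent=2))
```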

Generating a training dataset with a large number of augmentations is a common practice. Such datasets can contain millions of images, and the generation process may take hours or days.

How tasks are distributed in Supervisely v2.0

The new version can distribute work not only between cores within one computer but also between multiple nodes in a cluster. This improvement significantly decreases processing time, from hours to minutes, and makes experiments faster and cheaper.
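As a simplified picture of the per-machine part of this, here is a sketch of fanning an augmentation step out across CPU cores with Python's multiprocessing; the flip transform and the directory name are just examples:

```python
# Illustrative only: run a simple augmentation over many images in parallel.
from multiprocessing import Pool
from pathlib import Path
from PIL import Image, ImageOps

def augment(path: Path) -> None:
    img = Image.open(path)
    # Save a horizontally flipped copy next to the original.
    ImageOps.mirror(img).save(path.with_name(path.stem + "_flipped.jpg"))

if __name__ == "__main__":
    images = list(Path("dataset/img").glob("*.jpg"))   # hypothetical directory
    with Pool() as pool:                               # one worker per CPU core
        pool.map(augment, images)
```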

Supervisely v2.0: Data Uploading

We have increased the number of public datasets supported for upload: Pascal, COCO, Mapillary, Cityscapes, CamVid, Davis and others.

Now you can import all your data into the system and keep it in one format. We have sped up the uploading process by adding hashing: if an image is already in our system, we will not store duplicates of it in cloud storage. Thus, for all public datasets, uploading is fast. Just drag and drop a directory with images and annotations, and Supervisely will do the rest.
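The deduplication idea can be sketched in a few lines: content that hashes to the same key is stored only once. The in-memory dict below stands in for real cloud storage:

```python
# Illustrative only: content-addressed storage with SHA-256 deduplication.
import hashlib
from pathlib import Path

storage: dict[str, bytes] = {}   # stand-in for a cloud bucket

def upload(path: Path) -> str:
    data = path.read_bytes()
    key = hashlib.sha256(data).hexdigest()
    if key not in storage:       # identical images are skipped entirely
        storage[key] = data
    return key
```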

Supervisely v2.0: Collaboration

Many companies need to perform annotation at scale, with multiple users annotating images simultaneously. We have made this feature public: users can be created in one workspace and work with shared data, and an admin can set access rights for each user and monitor quality and productivity.

Example: users in one workspace, user activity stats

Supervisely v2.0: It’s more enterprise-ready than ever

To put everything together we had to make some changes to our technology stack.

We have moved all the data from our web servers to Google Cloud to strengthen safety and accessibility. Those who are looking for a self-hosted solution now have the option to store images on an already existing S3-compatible cloud (or just keep the data on bare metal), available in Supervisely Enterprise Edition (EE).
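For those going the self-hosted route, here is a hedged example of pointing a standard S3 client at an S3-compatible endpoint such as MinIO. The endpoint, bucket and credentials below are placeholders:

```python
# Illustrative only: talk to a self-hosted, S3-compatible object store with boto3.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://minio.internal.example.com",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)
s3.upload_file("photo.jpg", "training-images", "project_a/photo.jpg")
```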

Running multiple data transformation tasks, neural network training and inference jobs at once is challenging: we now distribute tasks across workers with the help of RabbitMQ and Kubernetes.
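In the spirit of that setup, here is a minimal sketch of publishing a task to a RabbitMQ queue for some worker to pick up; the host, queue name and payload are illustrative, not Supervisely's internal format:

```python
# Illustrative only: enqueue a task message on RabbitMQ with pika.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("rabbitmq.local"))
channel = connection.channel()
channel.queue_declare(queue="tasks", durable=True)
channel.basic_publish(
    exchange="",
    routing_key="tasks",
    body=json.dumps({"type": "inference", "model": "unet", "dataset": "project_a"}),
)
connection.close()
```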

Docker is a handy solution when it comes to deploying software to servers, but it is also a great tool for standardizing the dozens of models in our Model Zoo: we have architectures built on Caffe, PyTorch and TensorFlow (of course!). To keep this zoo in order, we implemented a very simple API so that Supervisely can communicate with each model, and we pack all the necessary libraries into a Docker image. Now we can connect our private registry and operate with image names, simple as that! And, by the way, you can too: in the Enterprise Edition you can connect your own Docker registry and then train and run models right from the Supervisely interface.
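To illustrate the registry workflow, here is a hedged sketch using the Docker SDK for Python; the registry address, credentials and image name are placeholders, not real Supervisely images:

```python
# Illustrative only: pull a model image from a private registry and start it.
import docker

client = docker.from_env()
client.login(registry="registry.example.com", username="user", password="secret")
image = client.images.pull("registry.example.com/models/unet", tag="latest")
container = client.containers.run(image, detach=True)  # add runtime="nvidia" for GPUs
print(container.id)
```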

Conclusion

Deep Learning is a crucial technology for all industries, and Supervisely is designed to meet the needs of business. It covers the end-to-end Deep Learning workflow at scale: manage data, annotate with AI-powered tools, and train, evaluate and deploy custom neural networks.

PS. Due to the high demand for the new version, we have to limit free access to our GPU instances. So if you're interested in trying out the new features of Supervise.ly, you can request access. If you have any technical or general questions, feel free to ask in our Slack.

If you found this article interesting, then let's help others find it too: more people will see it if you give it some 👏.