Visual aesthetics has been shown to critically affect a variety of constructs such as perceived usability, satisfaction, and pleasure. However, visual aesthetics is also a subjective concept, and therefore presents unique challenges when training a machine learning algorithm to capture that subjectivity.
Given the importance of visual aesthetics in human-computer interaction, it is vital that machines can adequately assess it. Machine learning, and deep learning in particular, has already shown great promise on tasks with well-defined goals, such as identifying objects in images or translating from one language to another. However, quantifying image aesthetics has been one of the most persistent problems in image processing and computer vision.
We set out to build a deep learning system that can automatically analyze and score an image's aesthetic quality with high accuracy. Try our demo to see your photo's aesthetic score.
We designed a novel deep convolutional neural network that can be trained to recognize an image's aesthetic quality, along with several training tricks that improve its accuracy.
In our paper published on arXiv, we propose a new neural network architecture that models the data efficiently by taking both low-level and high-level features into account. It is a variant of DenseNet with a skip connection at the end of every dense block. We also propose two training techniques that improve accuracy: training in the LAB color space, and grouping similar images into each minibatch, which we call coherent learning. Using these techniques, we achieve 78.7% accuracy on the AVA2 dataset. The state-of-the-art accuracy on AVA2 is 85.6%, achieved by a deep convolutional neural network pretrained on the ImageNet dataset; the best accuracy using handcrafted features is 68.55%. We also show that adding more training data (images from the AVA dataset not included in AVA2) raises our accuracy on the AVA2 test set to 81.48%, demonstrating that the model improves with more data.
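To illustrate the LAB training trick: LAB separates lightness (L) from the two color-opponent channels (a, b), which is one motivation for preferring it over raw RGB. The sketch below is a minimal pure-Python sRGB-to-LAB conversion for a single pixel, assuming a D65 reference white; it is an illustration of the color-space idea, not the preprocessing code from the paper.

```python
def srgb_to_lab(r, g, b):
    """Convert one 8-bit sRGB pixel to CIE LAB (D65 reference white)."""
    # Normalize to [0, 1] and undo the sRGB gamma curve.
    def linearize(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    r, g, b = linearize(r), linearize(g), linearize(b)

    # Linear RGB -> CIE XYZ (sRGB matrix, D65).
    x = 0.4124564 * r + 0.3575761 * g + 0.1804375 * b
    y = 0.2126729 * r + 0.7151522 * g + 0.0721750 * b
    z = 0.0193339 * r + 0.1191920 * g + 0.9503041 * b

    # XYZ -> LAB, normalized by the D65 white point.
    xn, yn, zn = 0.95047, 1.0, 1.08883

    def f(t):
        return t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0

    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    L = 116.0 * fy - 16.0      # lightness, 0..100
    a = 500.0 * (fx - fy)      # green-red opponent axis
    b_out = 200.0 * (fy - fz)  # blue-yellow opponent axis
    return L, a, b_out
```

For example, pure white (255, 255, 255) maps to L = 100 with a and b near zero, showing how the lightness channel is decoupled from color.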
1. Social media apps

App developers of social media sites can help their users decide which photo best suits their profile. We have all felt anxious uploading photos to social media or changing our display picture. With our API integration, app developers can help their users look good, always!
2. Dating Apps
Ok, this use-case may not appeal to the zen, non-materialistic folks among us, but to be honest, dating causes the most social anxiety of all. The dating landscape keeps changing too, so if you are active on dating apps, it's important to choose your best photos to improve your chances of right swipes!
Dating app developers can easily integrate our APIs to help their users upload their best photos; the visual aesthetics model can also be fine-tuned if developers want to optimise it on their own data set.
3. AI enabled camera phones
Google recently launched the Pixel 2 and Pixel 2 XL, which offer a portrait mode even though they lack the second lens found on many other phones. The iPhone X, Galaxy Note 8, and OnePlus 5 all implement portrait mode using data from two lenses: one captures the image while the other captures depth information, apart from providing some focal-range magic for the blurred background. The Pixel, by contrast, uses AI to give users HDR+ images comparable to pictures taken with a DSLR camera.
Similarly, mobile manufacturers can augment their native camera's capabilities by integrating the visual aesthetics APIs, letting users know the quality of a photo in real time, even before taking the snap! This will enable your users to share their photos with confidence, and you will create a great differentiator for your brand at no additional hardware cost.
4. Virality in online content
Content is king, and it has become ever more difficult to write compelling content that resonates with your audience. The best content these days often has great images to complement it, so you've got to include something that keeps eyes moving down the page.
BuzzSumo analyzed over 1 million articles and found that those with an image every 75-100 words received more social shares. Using our visual aesthetics tool, you can quickly check how appealing your images are and, accordingly, improve the virality of your blog post. You can check the demo here.
In this blog post, we have covered some of the use-cases of our visual aesthetics API. As machines become more competent than humans at judging such subjective content, many possibilities open up that were not feasible before. You can read more blogs on Visual Analytics here.
Can you think of any other exciting use-case for this technology? Let us know in the comments below, and we will cover your use-case in our next blog post and give you early access to the technology.