How Deep Learning Makes Fashion Smarter

by Allison Zhao, November 13th, 2017

Today’s fashion world is flooded with advertisements, videos and images. Within a few taps on our phones, we can instantly get backstage at the Victoria’s Secret fashion show or a sneak peek at Zac Posen’s latest collection. For the longest time, magazine editors have been the fashion tastemakers, and recently fashion bloggers and Instagram influencers have joined the fray. But the door is opening to more than just online personalities: deep learning could be setting at least some of tomorrow’s trends.

When my good friend Ahmad Qamar introduced me to Thread Genius (now acquired by Sotheby’s), I was pleasantly surprised to learn that deep learning has reached the worlds of fashion and art, fields I have always been interested in. Fashion trends are famously unpredictable; what was considered out of style a decade ago can suddenly become this year’s hottest trend. How do neural networks handle such unpredictable, dynamic data from the worlds of fashion and art? What impact will deep learning and neural networks have on these industries? And what exactly are deep learning and neural networks? Earlier this week, I had the pleasure of doing a short interview/Q&A with Ahmad, a highly experienced machine learning and deep learning engineer, to learn more about how deep learning is changing today’s fashion world. Here is what I took away from our conversation; I hope you find it informative and a fun read :)

Explain to Me Like I’m 5: What Are Artificial Neural Networks?

As a primer for the technicalities of neural networks, I suggest reading the first few paragraphs of this article published by Condé Nast’s tech blog: https://technology.condenast.com/story/a-neural-network-primer

Traditionally, data analysis works with numbers, graphs, charts and simple mathematical models. When it comes to unstructured data such as images, language and text, however, traditional analysis becomes challenging. Spotify, for example, sits on an enormous amount of data about playlists and listening behavior, and that data is not directly numerical. This is where neural networks come in: they let us build models on unstructured datasets. In the old days, this was done with hand-crafted features. If you wanted to teach a computer program to distinguish cats from cars, you might look for round objects like car wheels, or straight lines for a car body versus fuzzy lines for a cat. Sometimes that works, but it is error-prone and succeeds only part of the time. Handwriting recognition is another good example: OCR breaks down when you move from machine-typed text to human handwriting. Overall, it was a very involved process with not-so-ideal results.

Neural networks are universal function approximators: because they can approximate any function, their inputs and outputs can be unstructured. Modern neural networks handle feature extraction the way a human would, looking at the task at hand and discovering the features that are unique to it. Crucially, these features are not hand-coded; neural networks are trained.
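To make the "approximating functions with layers" idea concrete, here is a toy sketch (not anything from Thread Genius) of a two-layer network computing XOR, a function no single linear layer can represent. All weights here are hand-picked purely for illustration:

```python
import numpy as np

def step(z):
    return (z > 0).astype(float)  # simple threshold activation

# Hand-picked weights for a tiny two-layer network computing XOR.
# Each column of W1 is one hidden unit.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])    # unit 1 fires if x1 + x2 > 0.5 (OR)
                               # unit 2 fires if x1 + x2 > 1.5 (AND)
W2 = np.array([1.0, -1.0])     # output: OR and not AND, i.e. XOR
b2 = -0.5

def network(x):
    h = step(x @ W1 + b1)      # hidden layer extracts intermediate features
    return step(h @ W2 + b2)

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, int(network(np.array(x, dtype=float))))
# (0, 0) 0 / (0, 1) 1 / (1, 0) 1 / (1, 1) 0
```

The hidden layer builds intermediate concepts (here "OR" and "AND") that the output combines; real networks learn such features from data instead of having them hand-coded.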

How Are Neural Networks Trained? Like Pokémon?

To start off, you need training data consisting of many samples and labels (except in the case of unsupervised learning). Typically, you need at least 1,000 samples per concept, which is a real limitation of neural networks and a barrier to entry, since quality data is usually expensive and hard to come by in large quantities (cancer detection from blood samples is a good example). Once you have the training data, you show the model an image of, say, a car together with its label, and the model adjusts its parameters so that its prediction moves closer to the correct answer, using a technique called stochastic gradient descent. Each time you show the model an image, it adjusts its parameters slightly. Each full pass over the training data is called an epoch, and training typically runs for on the order of a hundred epochs (if you are training on 10,000 images, the model sees each of those 10,000 images about 100 times). By the end of the process, the model has tweaked its knobs enough that it will likely classify all of the training examples correctly, and that is when you are ready for the test phase: you evaluate the model on completely new samples it has never seen before to measure its accuracy.

OK, Now I Have an Idea of What It Is. Where Is It Applied in Real Life?

Broadly speaking, you can see deep learning applied in any domain that deals with images, such as interior design, fashion, fine art and photography. The medical field is also full of images in many forms, including X-rays, MRIs, ultrasounds and blood samples, and teaching machines to produce diagnoses is an exciting idea. Ideally, machines can produce more accurate diagnoses than human doctors because they are not confined by human limitations: they are less biased and can learn from a much larger amount of data. The same applies to the art domain. Art historians normally specialize in particular media or genres, but machines can cover a much wider breadth of data (check out this awesome Thread Genius art demo). Satellite imagery is another good application: researchers studying population density or deforestation no longer need human labor to go through satellite images and mark them up manually, and hedge fund managers can use visual search to predict commodity prices (for example, estimating Chipotle’s business performance from parking-lot utilization in satellite images). The advantage of machines is scale, and these are just the applications to images. For audio, recognizing music genres and transcribing speech to text (an area of natural language processing where deep learning is only now improving accuracy) are also great examples.

That’s Amazing! What About Fashion?

What Thread Genius has built is a neural network that detects around 1,000 fashion concepts such as colors, product details, embellishments and patterns. For a given image, TG’s neural network can pinpoint where the different products are. On top of that, TG offers visual search: you query with an image and find other images that share similarities. This is actually different from reverse image search, which typically compares raw pixels and has no understanding of concepts. Feed reverse image search a picture of Kanye West in a coat and the results will probably be other pictures of Kanye, whereas Thread Genius can find you the near-exact coat Kanye is wearing. A neural network understands what a coat consists of and can find similar images even if the coat is dirty, sideways or backwards. Neural networks have a conceptual understanding.
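A common way to build this kind of concept-level search (I am not claiming this is Thread Genius’s actual pipeline) is to represent each image as an embedding vector produced by a neural network, then rank catalog items by cosine similarity to the query. Here is a minimal sketch with made-up 4-D vectors standing in for real network embeddings:

```python
import numpy as np

# Hypothetical catalog: each image is represented by an embedding
# vector from a neural network (values invented for illustration).
catalog = {
    "kanye_in_coat":          np.array([0.90, 0.10, 0.00, 0.20]),
    "same_coat_product_shot": np.array([0.85, 0.15, 0.05, 0.10]),
    "kanye_no_coat":          np.array([0.10, 0.90, 0.30, 0.00]),
    "random_dress":           np.array([0.00, 0.20, 0.90, 0.40]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means same direction in concept space.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def visual_search(query_name, k=2):
    """Return the k catalog items most similar to the query embedding."""
    q = catalog[query_name]
    scores = {name: cosine(q, v)
              for name, v in catalog.items() if name != query_name}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(visual_search("kanye_in_coat"))
```

Because similarity is measured in embedding space rather than over raw pixels, the top match for the Kanye photo is the product shot of the same coat, not just another photo of Kanye.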

Here are some examples of applications:

1. A retailer that wants to show customers similar products when an item is out of stock.
2. A marketer that wants to surface user-generated content (UGC) photos of products, e.g. an Instagram or Pinterest photo in the product listing, or #asseenonme on Asos.com. In fact, a customer is three times more likely to buy a product when shown a UGC photo.

This is called contextualizing products: making Instagram shoppable, for instance. Along the same lines is understanding users’ aesthetic preferences to improve advertising. Imagine Nordstrom letting you sign in with your Instagram account so that it can get a better sense of your fashion style, build a visual taste profile and give you customized product recommendations based on it.

TL;DR:

Deep learning and neural networks are making fashion smarter by providing a powerful, high-quality visual search engine. These technologies help retailers, marketers and fashion tastemakers better understand their audiences and directly improve sales. For customers, deep learning can help us figure out how to wear a certain piece of clothing by showing us millions of images of how other people wear it. The possibilities and opportunities are endless, and I am more than excited to see how this will evolve.

Check out Thread Genius and their awesome product here.

**Update:** Thread Genius has now been acquired by the world-famous auction house Sotheby’s. Congrats Ahmad Qamar and Andrew Shum! https://techcrunch.com/2018/01/25/sothebys-acquires-thread-genius-to-build-its-image-recognition-and-recommendation-tech/

Thread Genius on Medium: https://techburst.io/@ThreadGenius