Emma Rosenberg

@emma.rosenberg

What designers need to know about machine learning

Machine learning isn’t science fiction, or just another industry buzzword. It’s already widely used in features such as Netflix’s recommendations, spam filters in email and messaging apps, and image recognition on sites like Facebook, allowing products and services to ‘learn’ over time and adapt accordingly. This change in how products and services are engineered is beginning to change how we conceptualise and design them from the outset. So, for us designers, it’s time to get our heads around how this technology impacts the design and user experience of the things we work on, and how we can use it to create new and better products for our users.

How is machine learning different from the technologies we currently work with every day?

What we’re used to working with in most computer systems is boolean logic. That is, every expression built into our software can ultimately be described as ‘true’ or ‘false’. This allows a program to be built as a series of concrete steps, like a ‘choose-your-own-adventure’ book, so that we can build and test it, and we’ll always know what will happen next, according to the rules we’ve built into it.

Machine learning is different. Rather than using a set of ‘true’ and ‘false’ rules to define the program’s behaviour, a machine learning system looks for patterns within a set of existing data in order to produce an approximate representation of the rules that created that data in the first place. This means that while a conventional boolean-logic program is always exactly right about the cases its rules anticipate, a machine learning system can be approximately right about problems far too complicated to capture in explicit rules.
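To make the contrast concrete, here’s a minimal sketch in Python (the spam-filter example is hypothetical and deliberately simplified): a hand-written boolean rule only catches the exact cases its author anticipated, while a rule ‘learned’ from labelled examples generalises to messages it has never seen.

```python
# A hard-coded boolean rule: every case must be anticipated in advance.
def is_spam_rule(message):
    return "win money" in message.lower()  # fixed forever

# A 'learned' rule: approximate the pattern from labelled examples instead.
def train_spam_scores(examples):
    """Count how often each word appears in spam vs non-spam messages."""
    scores = {}
    for text, label in examples:
        for word in text.lower().split():
            spam, ham = scores.get(word, (0, 0))
            scores[word] = (spam + 1, ham) if label == "spam" else (spam, ham + 1)
    return scores

def is_spam_learned(message, scores):
    spam = ham = 0
    for word in message.lower().split():
        s, h = scores.get(word, (0, 0))
        spam += s
        ham += h
    return spam > ham

examples = [
    ("win money now", "spam"),
    ("claim your free money", "spam"),
    ("lunch at noon", "ham"),
    ("meeting notes attached", "ham"),
]
scores = train_spam_scores(examples)

print(is_spam_rule("Totally free money offer"))            # False: the rule misses the variant
print(is_spam_learned("Totally free money offer", scores))  # True: the learned pattern generalises
```

Real spam filters use far more sophisticated statistical models, but the principle is the same: the behaviour comes from the data, not from hand-written rules.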

Why is this useful? Well, for starters, we can’t program everything with boolean logic. Identifying a human face, for example, is too complex a problem for boolean logic to handle. What’s also interesting about this approach is that it’s somewhat like our own mental process for learning about the world around us: we learn to make sense of things by noticing patterns in what we experience repeatedly, from childhood all the way through our lives.

Consider the process of recognising a face. You’ve done it millions of times, probably almost every day of your life. But it’s difficult to express the steps you’d take to do it. There’s a huge amount of variation in faces: a face can be seen under countless lighting conditions, at any angle, in any place, and its apparent proportions vary depending on where we stand in relation to it. The set of rules a program would need becomes far too complex to describe with true/false logic.

How machine learning ‘thinks’ — deductive vs inductive reasoning

In deductive reasoning, we start with a theory about why something happens, develop hypotheses, gather observations or data to test those hypotheses, and then prove or disprove our original theory. In inductive reasoning, we start with a group of observations, look for patterns in the data, use these to formulate tentative hypotheses, and then try to produce a general theory that accounts for our original observations. Machine learning systems can be thought of as ways to automate or assist inductive reasoning processes.

The difficulty of an inductive reasoning problem depends on the number of relevant and irrelevant attributes involved, as well as how subtle or interdependent the attributes are. Problems like recognising a human face involve a huge amount of interrelated attributes and a lot of noise. Machine learning systems are able to automate the process of synthesising general knowledge from large sets of information.

Using complex information and new types of input

So how can we use some of these new tools? Well, designers should consider that with machine learning, our systems can understand complex information from a large array of sources. For example, we can now recognise spoken language, facial expressions and objects in photographs, meaning we can start to think beyond clicks and swipes to other ways for users to interact with our products. You can upload a picture of an item you’re shopping for and ask the system to find the nearest match, or tell you what a particular item is called. Aural inputs similarly allow you to interact with a system by speaking to it. Inputs from the body, like facial expression, or data collected via a device like the Apple Watch or Fitbit, can now be used as an interaction method or as a way to assess the state of a user at a given time.

In addition to these new kinds of inputs, machine learning allows us to discover patterns within user behaviour at a scale greater than ever before, and also use this data to predict how our users might behave in the future. We can use this information to design better products and services, and to understand in greater depth how our interfaces are currently being used, who our customers are, and design new types of features based on our ability to predict user behaviour.

In the way that recommender systems can suggest music or movies based on the similarities of users’ tastes, the ability to detect patterns across large groups of users can be used to suggest relevant features to a user based on the existing behaviour the user already demonstrates. And as learning systems become more and more common in the products we use every day, user behaviour will also begin to adapt to these systems at the same time as the systems adapt to user behaviour.

reCAPTCHA asks users to identify objects in photographs, and those answers are used to improve the performance of Google’s image recognition system

Types of learning

Supervised learning: When we can provide our machine learning system with example inputs paired with known outputs, we use these to ‘train’ the system, letting it learn the correlations between inputs and outputs from the examples. For instance, we can use supervised learning to predict house prices by providing the system with historical sale prices for a certain city, together with the attributes of each house. Once the system has learned the relationship between a house’s attributes and its price according to the data, it can predict the price of an unsold house from its characteristics, eg number of bedrooms, location, size.
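As a sketch of the idea (the data below is invented, and real systems would use richer models and far more attributes), ordinary least squares is enough to ‘train’ a simple price predictor from historical examples:

```python
import numpy as np

# Hypothetical historical data: [bedrooms, size in square metres] per house,
# and the price each one sold for.
X = np.array([[2, 70], [3, 90], [3, 110], [4, 130], [5, 160]], dtype=float)
y = np.array([200_000, 260_000, 300_000, 360_000, 430_000], dtype=float)

# 'Training' here is ordinary least squares: find the weights that best map
# the known attributes onto the known prices.
X_b = np.hstack([X, np.ones((len(X), 1))])  # add an intercept column
weights, *_ = np.linalg.lstsq(X_b, y, rcond=None)

# Predict a price for an unsold house: 4 bedrooms, 120 square metres.
unseen = np.array([4.0, 120.0, 1.0])
predicted = unseen @ weights
print(round(predicted))  # lands between the prices of comparable houses
```

The same pattern scales up: more attributes become more columns in the input, and more sophisticated models replace the least-squares fit, but the input/output framing stays the same.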

Unsupervised learning: We ask the machine to discover patterns within a set of data, without telling it to look for particular correlations. This can be used to discover patterns within data sets or systems which are too complex for humans to figure out on their own. Unsupervised learning can learn the stylistic patterns in a composer’s work and then generate new compositions in that style. It can also be used to improve the quality of supervised learning.
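A minimal sketch of unsupervised learning, using synthetic data: a bare-bones k-means loop discovers two groups in unlabelled points without ever being told what the groups are.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabelled data: two hidden groups of points (we never tell the system this).
data = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.5, size=(50, 2)),
    rng.normal(loc=[5.0, 5.0], scale=0.5, size=(50, 2)),
])

# Start with one arbitrary point and the point farthest from it as centres.
far = np.argmax(np.linalg.norm(data - data[0], axis=1))
centres = np.array([data[0], data[far]])

# Minimal k-means: alternate between assigning points to their nearest centre
# and moving each centre to the mean of its assigned points.
for _ in range(10):
    dists = np.linalg.norm(data[:, None, :] - centres[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    centres = np.array([data[labels == k].mean(axis=0) for k in range(2)])

# The discovered centres should land near the hidden group means (0,0) and (5,5).
print(np.sort(centres[:, 0]).round(1))
```

The loop never sees a label; the grouping emerges purely from the structure of the data, which is exactly what makes unsupervised methods useful on data sets too complex for humans to sort by hand.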

Semi-supervised: Here we use the ‘discovery’ capabilities of unsupervised learning to improve the predictions in a supervised learning problem.
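A toy sketch of the semi-supervised idea (the numbers are invented): a few labelled points give rough class centres, and pseudo-labelling a larger pool of unlabelled points then refines those centres, improving on the purely supervised estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

# A handful of labelled examples (the supervised part)...
labelled_x = np.array([1.8, 1.0, 5.8, 4.4])
labelled_y = np.array([0, 0, 1, 1])

# ...and a much larger pool of unlabelled examples from the same two groups,
# whose true means are 1.0 and 5.0.
unlabelled = np.concatenate([rng.normal(1.0, 0.3, 100), rng.normal(5.0, 0.3, 100)])

# Supervised step: estimate each class centre from the labelled data alone.
centres = np.array([labelled_x[labelled_y == c].mean() for c in (0, 1)])

# Unsupervised step: pseudo-label each unlabelled point with its nearest
# centre, then refine the centres using labelled and pseudo-labelled data.
pseudo = np.argmin(np.abs(unlabelled[:, None] - centres[None, :]), axis=1)
refined = np.array([
    np.concatenate([labelled_x[labelled_y == c], unlabelled[pseudo == c]]).mean()
    for c in (0, 1)
])

# The refined centres sit closer to the true group means than the
# estimates from the four labelled points alone.
print(centres.round(2), refined.round(2))
```

This pseudo-labelling trick is only one of several semi-supervised strategies, but it shows the shape of the approach: unlabelled data is cheap and plentiful, and its structure can sharpen a model trained on a small labelled set.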

Where can we use it?

Machine learning is already being used across many industries, but in many cases, we’re at the very beginning of applying these technologies.

McKinsey Global Institute, December 2016 — ‘The Age of Analytics: Competing in a Data-Driven World’
Many organizations focus on the need for data scientists, assuming their presence alone will enable an analytics transformation. But another equally vital role is that of the business translator who serves as the link between analytical talent and practical applications to business questions. In addition to being data savvy, business translators need to have deep organizational knowledge and industry or functional expertise. This enables them to ask the data science team the right questions and to derive the right insights from their findings.

For designers, there’s an opportunity to be a ‘business translator’, bridging the gap between the technological capabilities and the practical applications of the technology for the business and for the end user.

Some of the types of problems we can think about using machine learning to solve are catalogued in the same McKinsey Global Institute report, ‘The Age of Analytics: Competing in a Data-Driven World’ (December 2016).

How to use machine learning as part of a design process

We’ve touched on a few of the capabilities of machine learning systems, but there are many others, too many to go into here. Regardless of the type of system or functionality you wish to employ, explore the problem in as much detail as possible so you can define its parameters. Consider the inputs you’ll be able to provide to the system, and the outputs you’d like to receive (for example, an input could be an image and an output a link to a product, or a description of the image). At this point, if you understand the underlying principle of the machine learning technique you’d like to use, you can treat the learning system as a component of your design, sketching out the design as a whole and establishing the technical details later.

It’s important to collaborate with a machine learning programmer or data scientist whenever possible; their understanding of what’s possible, and of whether and how your vision can be achieved, will be invaluable throughout the design process. Work together to prototype ideas and validate assumptions. Designers have a part to play in developing new uses for existing algorithms and tools, and in intuiting where a technology can be used to create a better user experience or an entirely new product. Ask questions, and find out when your assumptions are incorrect: it’ll help you reach a level of understanding where you can start to incorporate machine learning ideas into your everyday process.

What next?

Machine learning is changing the face of what’s possible in design. We can use machine learning systems to make sense of human behaviour and make predictions based on this data. We can also use the tools that machine learning has made possible, like image, face, and voice recognition, recommender systems, and many more. The impact of these tools is still to be determined — but the more designers understand and utilise the technology in their daily work, the better the end products will be. Experimentation will be an invaluable part of the process of discovering how we can make use of machine learning now and in the future, and understanding machine learning will enable designers to build bridges between technology, business needs and the end user.

There’s so much more for designers to learn about this — the surface has only just been scratched here! This book is an invaluable resource for understanding more about machine learning (and was an invaluable resource for writing this article), and you can learn more about how to work with machine learning algorithms on Coursera, and read the full McKinsey report.
