Design Fair Markets by Using Algorithms

by Arjan Haring, January 5th, 2019

Machine learning should fuel an inclusive society

Fairness and bias in machine learning is a blossoming line of research. Most of this work focuses on discrimination and inclusion. While this is an important line of research that I wholeheartedly support, I propose that tech companies and platforms also start working on “biased” algorithms that facilitate fair markets.

In this post I will argue that:

  • Bias and unfairness are pervasive in tech including machine learning
  • Fairness of markets can also be facilitated by machine learning

Unfairness is pervasive in tech

Try it out: type “He is a nurse. She is a doctor.” into Google Translate and translate it into Turkish. Then translate the result (“O bir hemşire. O bir doktor.”) back into English and you get “She is a nurse. He is a doctor.”



From: Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186, 2017.

This is not only a feature of Google Translate; it is a feature of almost all tech, including many algorithms. Rachel Thomas does a great job in her recent TEDx talk of summarizing the evidence that bias in algorithms is all around us. It is in facial recognition software (which still performs poorly on female and African American faces), in voice recognition (YouTube automatically generates closed captions, but they are less accurate for female voices), and the list goes on and on.

Rachel Thomas, PhD, co-founder of Fast.ai | TEDxSanFrancisco

The machine learning community is aware of these biases, and through initiatives like the Fairness, Accountability, and Transparency conference (ACM FAT*), Inclusion in Machine Learning, and the Algorithmic Justice League, we are addressing the negative effects of bias.

Technology can stimulate fair market exchanges

At a meetup on two-sided markets and machine learning that I moderated last April, I met my colleague Yingqian Zhang, and we got to discussing her work on fairness and machine learning in market settings. It didn’t take long before another discussion was planned with Yingqian (and Martijn Arets) on the fairness of automated decisions in the platform economy.

I highly recommend reading one of Yingqian’s papers (with first author Qing Chuan Ye and co-author Rommert Dekker), which deals with the cost of factoring fairness into algorithms (spoiler alert: fairness seems to come at a very small price in the case they studied). It is based on a real case at the Rotterdam harbor, where jobs to move containers were auctioned.

Photo credit: Bernard Spragg. NZ

The challenge offered by the harbor was the following: trucks that come from the hinterland to drop off or pick up containers often have spare time between tasks. Terminals could take advantage of these idle trucks by offering them jobs. Different trucking companies can bid for those jobs, depending on which of their trucks are idle at specific times. Given the bids of the different companies, the terminals then decide on the best allocation of jobs to companies.

To meet the fairness criteria in task allocation, Yingqian and her colleagues developed a polynomial-time optimal method consisting of two novel algorithms: IMaxFlow and FairMinCost. Together, these two algorithms output a max-min fair task allocation with the least total cost.
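To make that two-stage idea concrete, here is a minimal Python sketch on a tiny made-up instance (the bids table, company names, and job names are all hypothetical). It brute-forces the search purely for illustration; the paper’s IMaxFlow and FairMinCost algorithms achieve this outcome in polynomial time using flow techniques, not enumeration.

```python
# Toy sketch of "max-min fair task allocation with least total cost".
# Illustrative only: brute force on a tiny hypothetical instance.
from itertools import product

# Hypothetical bids: bids[company][job] = cost of doing that job,
# or None if the company did not bid on it.
bids = {
    "A": {"job1": 4, "job2": 6, "job3": None},
    "B": {"job1": 5, "job2": None, "job3": 3},
    "C": {"job1": None, "job2": 7, "job3": 4},
}
jobs = ["job1", "job2", "job3"]
companies = list(bids)

def allocations():
    """Yield every feasible assignment of each job to a company that bid on it."""
    for combo in product(companies, repeat=len(jobs)):
        if all(bids[c][j] is not None for c, j in zip(combo, jobs)):
            yield dict(zip(jobs, combo))

def min_share(alloc):
    """Fairness score: the smallest number of jobs any single company receives."""
    return min(sum(1 for c in alloc.values() if c == comp) for comp in companies)

def total_cost(alloc):
    """Total cost of an allocation, summed over the winning bids."""
    return sum(bids[c][j] for j, c in alloc.items())

# Stage 1: find the best achievable fairness level (maximize the minimum share).
best_fairness = max(min_share(a) for a in allocations())

# Stage 2: among all max-min fair allocations, pick the one with least total cost.
fair_allocs = [a for a in allocations() if min_share(a) == best_fairness]
best = min(fair_allocs, key=total_cost)

print(best, "cost:", total_cost(best))
# -> {'job1': 'A', 'job2': 'C', 'job3': 'B'} cost: 14
```

The two stages mirror the division of labor between the two algorithms: first establish the best achievable fairness level, then find the cheapest allocation that attains it.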

Of course, this harbor case is not unique; it raises questions that were common at my former employer Booking.com as well. For whom are we actually optimizing fairness, for example?

  • Are we optimizing fairness for customers?
  • Are we optimizing fairness for suppliers?
  • Or are we optimizing fairness for the platform?

At Booking.com we had numerous discussions before we implemented algorithms that would affect both sides of our market. This research would have been welcome input for shaping our arguments back then. For now, it is a start. How exactly to go about creating a fair market and an inclusive society fueled by machine learning is something I want to discuss in a follow-up post.

What I set out to communicate with this post was:

  • Bias and unfairness are pervasive in tech including machine learning
  • Fairness of markets can also be facilitated by machine learning

Let’s be clear: this is all rather new work. There are no clear answers, and, as always, more work has to be done. Not only by the research community: I want to explicitly urge industry data scientists to pick up this challenge, along with the other challenges of applying machine learning in an inclusive and fair manner.
