Machine learning should fuel an inclusive society

Fairness and bias in machine learning is a blossoming line of research. Most of this work focuses on discrimination and inclusion. While this is an important line of research that I wholeheartedly support, I propose that tech companies and platforms also start working on "biased" algorithms that facilitate fair markets.

In this post I will argue that:

- Bias and unfairness are pervasive in tech, including machine learning
- Fairness of markets can also be facilitated by machine learning

Unfairness is pervasive in tech

Try it out: type "He is a nurse. She is a doctor." into Google Translate and translate it into Turkish. Then translate the result ("O bir hemşire. O bir doktor.") back into English and you get "She is a nurse. He is a doctor."

From: Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186, 2017.

This is not only a feature of Google Translate, it is a feature of almost all tech, including many algorithms. Rachel Thomas does a great job in her recent TEDx talk of summarizing the evidence that bias in algorithms is all around us. It is in facial recognition software (which still performs poorly on female and African American faces), in voice recognition (YouTube automatically generates closed captions, but less accurately for female voices), and the list goes on and on.

Rachel Thomas, PhD, cofounder of Fast.ai | TEDxSanFrancisco

The machine learning community is aware of these biases, and through initiatives like the Fairness, Accountability, and Transparency (ACM FAT*) conference, Inclusion in Machine Learning, and the Algorithmic Justice League we are addressing the negative effects of bias.

Technology can stimulate fair market exchanges

During a meetup on two-sided markets and machine learning that I moderated last April, I met with colleague Yingqian Zhang and we got to discuss her work on fairness and machine learning in market settings.
It didn't take long before another discussion was planned with Yingqian (and Martijn Arets) on the fairness of automated decisions in the platform economy. I highly recommend reading one of Yingqian's papers (with first author Qing Chuan Ye and Rommert Dekker), which deals with the cost of factoring fairness into algorithms (spoiler alert: fairness seems to come with a very small price in the case they studied). It is based on a real case at the Rotterdam harbor where jobs to move containers were auctioned.

Photo credit: Bernard Spragg. NZ

The challenge offered by the harbor was the following: trucks that come from the hinterland to drop off or pick up containers often have spare time in between tasks. Terminals could take advantage of these idle trucks by providing them with jobs. Different trucking companies can bid for those jobs, depending on their idle trucks at specific times. Given the bids of the different companies, the terminals then decide on the best allocation of jobs to companies.

To meet fairness criteria in task allocation, Yingqian and her colleagues developed a polynomial-time optimal method consisting of two novel algorithms: IMaxFlow and FairMinCost. The output of these two algorithms is a max-min fair task allocation with the least total cost.

Of course this harbor case is not unique; it elicits questions that were common at my former employer Booking.com as well. For whom are we actually optimizing fairness? Are we optimizing fairness for customers? For suppliers? Or for the platform itself? At Booking.com we had numerous discussions before we implemented algorithms that would affect both sides of our market. This research would have been welcomed as input for shaping our arguments back then. For now, it is a start.
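To make the "max-min fair allocation with the least total cost" objective concrete, here is a minimal brute-force sketch on a toy instance. The job names and bid amounts are invented for illustration, and the exhaustive search below is not the paper's polynomial-time IMaxFlow/FairMinCost method; it only demonstrates the objective itself: first lift the share of jobs going to the worst-off bidding company, then break ties by total cost.

```python
from itertools import product

# Toy bids (hypothetical numbers): bids[job][company] = that company's
# bid (cost) for handling the job with one of its idle trucks.
bids = {
    "job1": {"A": 10, "B": 12},
    "job2": {"A": 8,  "B": 9},
    "job3": {"B": 7,  "C": 11},
    "job4": {"A": 6,  "C": 5},
}

def allocate(bids):
    """Brute-force search for a max-min fair allocation with least total cost.

    Max-min fairness here: among all ways of giving each job to one of its
    bidders, prefer allocations whose ascending-sorted job-count profile is
    lexicographically largest (i.e. raise the worst-off company's share
    first), breaking ties by lower total cost.
    """
    jobs = sorted(bids)
    companies = sorted({c for offers in bids.values() for c in offers})
    best_key, best_assignment = None, None
    # Enumerate every assignment of each job to one of its bidders.
    for choice in product(*(sorted(bids[j]) for j in jobs)):
        assignment = dict(zip(jobs, choice))
        counts = [sum(1 for c in assignment.values() if c == comp)
                  for comp in companies]
        shares = tuple(sorted(counts))   # worst-off company's share first
        cost = sum(bids[j][assignment[j]] for j in jobs)
        key = (shares, -cost)            # fairness first, then cheaper cost
        if best_key is None or key > best_key:
            best_key, best_assignment = key, assignment
    return best_assignment, -best_key[1]

assignment, total_cost = allocate(bids)
print(assignment, total_cost)
```

On this instance every company that bid ends up with at least one job, and among all such fair allocations the cheapest one is chosen. The search is exponential in the number of jobs, which is exactly why a polynomial-time method like the one in the paper matters for a real terminal.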
How exactly to go about creating a fair market and an inclusive society fueled by machine learning is something I want to discuss in a follow-up post. What I set out to communicate with this post was:

- Bias and unfairness are pervasive in tech, including machine learning
- Fairness of markets can also be facilitated by machine learning

Let's be clear: this is all rather new work. There are no clear answers. And, as always, more work has to be done. Not only by the research community; I want to explicitly urge industry data scientists to pick up this challenge along with the other challenges of applying machine learning in an inclusive and fair manner.