Algorithms and Society: A Not-So-Simple Discussion, In Three Parts


by Eleanor Bailey, March 5th, 2020

Too Long; Didn't Read

Algorithms have been pulling the strings of our daily lives for a long time now. Their effect on our decision making starts the moment we begin engaging with the digital world. The human decision-making faculty is a fundamental part of the social, economic and political fabric of life. The Algorithmic Revolution will move from small decisions to bigger ones, such as what kind of news will be relevant to us and the content we would find relevant on social media. We don’t know when the Algorithms will start to curb the voices of the minority.


Image Source: Unified Infotech

Have you ever wondered how you see the most relevant posts at the top of your Facebook timeline, or get those amazing offers from your favorite restaurants?

If you have and are still looking for the answer, it is the Algorithm.

This basic tool for analyzing data and producing an appropriate output has been pulling the strings of our lives for a long time now. But how is this technical puppeteer affecting our daily life?

And how exactly is it preventing everyone’s voices from being heard? Let’s find out.

Part I: Is The Algorithm Taking Away Our Decision-Making Faculty?

From the moment we wake up to the moment we go to sleep, we make around 35,000 remotely conscious decisions. That is a huge number of decisions, and each one has either good or bad consequences. Now, how many of these are affected by the Algorithm?

The Algorithm’s effect on our decision making starts the moment we begin engaging with the digital world. Whether it is choosing what to have for breakfast, or deciding whether to take the train to work or get an Uber, the Algorithm is shaping our decisions. And the fun part is that we are letting it.

a. From Small To Big: Algorithms Making All The Decisions For Us

It is easy to delegate the decision-making part to the so-called unbiased, infallible system that is the Algorithm.

It will process nutritional data and tell us we should have a healthy breakfast, and check traffic data to tell us that taking the train will be the faster way to get to work.

Starting with these small decisions, it will work its way up to bigger ones, such as what kind of news will be relevant to us and the content we would find relevant on social media.

Even though this way of life seems easier, it has larger implications. The human decision-making faculty is a fundamental part of the social, economic and political fabric of life.

Out of all the creatures on this planet, it is only us humans who have the ability to make decisions that are unrelated to our basic needs in life.

b. The Algorithmic Revolution Will Start From Smaller Decisions

Even though decision making is a fundamental ability and right, today the user is voluntarily removing themselves from the decision-making process. And when this happens, the question we really need to ask ourselves is this:

Is the Algorithmic system taking away our ability to make choices?

While this question seems like overkill, it still remains valid.

Thanks to the smart-device era, we don’t really have to make the small decisions, such as when to order detergent or what temperature to set the thermostat at.

But, how long till the Algorithm starts to make serious decisions for us? Like which candidate to vote for or whether a convicted person really committed the crime or not?

There is definitely something wrong with the way the Algorithm is taking over our decision-making faculty. And if you are wondering why, then keep reading to the next part.

Part II: How The Algorithm Is Creating Selective Perception And Curbing The Voices Of Millions

Let’s begin with a trivial example. 

A recent experiment conducted by Ben Berman, a game designer in San Francisco, showed how popular dating apps like Tinder, Bumble, Hinge and many more use collaborative filtering to generate matches according to a majority opinion.

This means that once you register on a dating app, the Algorithm will use your data and show you the matches approved by people with data similar to yours.

If you happen to be a 5’4” blonde woman living in New York who loves hiking, then you will see matches similar to those shown to other women with data like yours.
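The collaborative-filtering idea behind this can be sketched in a few lines. This is a toy illustration, not any app’s actual system — the users, profile IDs and similarity measure are all invented for the example:

```python
# Hypothetical "likes" data: user -> set of profile IDs they swiped right on.
likes = {
    "alice": {"p1", "p2", "p3"},
    "bella": {"p1", "p2", "p4"},
    "carol": {"p5"},
}

def jaccard(a, b):
    """Overlap between two users' liked sets (0 = nothing shared, 1 = identical)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def recommend(user, data):
    """Surface profiles liked by the most similar other user --
    majority-opinion collaborative filtering in miniature."""
    similarity, nearest = max(
        (jaccard(data[user], data[other]), other)
        for other in data if other != user
    )
    # Recommend whatever the look-alike liked that this user hasn't seen yet.
    return data[nearest] - data[user]

print(recommend("alice", likes))  # alice's closest match is bella, so "p4" surfaces
```

Notice that "carol", whose tastes overlap with nobody, contributes nothing to anyone’s matches — the minority voice simply drops out of the loop.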

a. The Question And Problem Of Majority Opinion

The question this example raises is how long until collaborative filtering is applied to other aspects of life, such as deciding who is more in need of that life-saving medical treatment? While the Algorithm caters to the majority opinion, what happens to the opinions of the few?

The problem is that we don’t know when the Algorithm Black Box will start to curb the voices of the minority in order to cater to the majority opinion. We have already seen instances where medical AI discriminated against Black patients when it came to getting proper medical treatment. What if, at a later stage, the Algorithm starts to decide whose voice and opinion is worth hearing and whose isn’t?

b. Will Society Be Shaped By The “Voices” Selected By The Algorithm?

We have already seen to what extent the online world affects the real world. Whether it is fake news spreading terror among the people or malicious opinions influencing the masses, the virtual world has some real power over us in the real world, and it can turn ugly if the Algorithm messes up.

This is not limited to the opinions expressed online and how they affect us. Algorithms are also deciding whom to hire for a job based on their resume AND their voice, and deciding who gets life-saving treatment and who doesn’t.

It is a convenient system, especially when a team of 10 HR members doesn’t have to go through hundreds or thousands of job applications.

But it does call the fairness of the system into question. How can a set of Algorithms, sifting through the data, decide who is worthy?
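To make the worry concrete, here is a hypothetical resume screener of the simplest kind. The keyword list and weights below are entirely invented, and that is exactly the point — they are where the designer’s assumptions about who is “worthy” get baked in:

```python
# Hypothetical keyword screener. The keyword list and its weights are invented
# for illustration -- and they encode the designer's assumptions directly.
KEYWORDS = {"python": 2, "leadership": 1, "ivy league": 3}

def score(resume_text):
    """Score a resume by summing the weights of the keywords it contains."""
    text = resume_text.lower()
    return sum(weight for kw, weight in KEYWORDS.items() if kw in text)

def shortlist(resumes, cutoff=3):
    """Keep only the candidates whose keyword score clears the cutoff."""
    return [name for name, text in resumes.items() if score(text) >= cutoff]

resumes = {
    "candidate_a": "Python developer with leadership experience",
    "candidate_b": "Self-taught engineer who shipped three products solo",
}
print(shortlist(resumes))  # candidate_b never surfaces, whatever their merit
```

Nothing in the code is malicious; the unfairness lives entirely in the choice of keywords and cutoff, which no applicant ever gets to see.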

c. The Real Problem Might Not Be The Algorithm, But Something Else

The effects of the Algorithm have seemed a little sinister so far, but the actual problem and the solution both lie elsewhere. At the end of the day, the Algorithm is a system designed by human beings, and the data it uses is shaped by human actions.

Since it is designed by human beings, we can safely assume that it will follow the thoughts and ethics of its designer. On top of that, poor representation of diversity within the institution will have a serious impact on people and how their voices are heard.

Without any standardized rules about bias in Algorithm design, the system will continue to be weighed down by the bias and judgment of the designer.

d. The Second Problem Will Always Be Data

In an age where Data fuels almost every system, those with the most useful collection of data will win the game.

From Google and Facebook to online shopping platforms, each has huge sets of data on us that can be used for almost anything. The problem, however, remains: who is going to act as the gatekeeper and make sure that the data is used properly?

The raw data can reveal patterns that will ultimately lead the Algorithm to mute the voices of millions. This is because the data is shaped by human behavior and therefore reflects human bias.
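This “bias in, bias out” pattern can be shown with a toy model trained on invented historical decisions — the groups, outcomes and counts below are made up purely to illustrate the mechanism:

```python
from collections import Counter

# Invented historical decisions: (group, outcome) pairs. Past reviewers
# approved group "x" far more often than group "y" -- the bias lives
# in the data itself, not in the code below.
history = ([("x", "approve")] * 80 + [("x", "reject")] * 20
           + [("y", "approve")] * 30 + [("y", "reject")] * 70)

def learned_decision(group, records):
    """A 'model' that simply replays the majority outcome seen for each group."""
    outcomes = Counter(outcome for g, outcome in records if g == group)
    return outcomes.most_common(1)[0][0]

print(learned_decision("x", history))  # "approve"
print(learned_decision("y", history))  # "reject" -- bias in, bias out
```

A faithful learner trained on unfair records becomes a faithful reproducer of that unfairness, which is why the next question — how to stop the Algorithm from learning these biases — matters so much.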

So how do we prevent the Algorithm from learning these biases and censoring those whose opinion needs to be heard?

Part III: The Solutions For The Sinister Scenario

While all this sounds very sinister, there might be a few solutions to make sure that the Algorithmic systems provide us with fair and unbiased judgment.

a. Legal Structure Might Be The Answer

Having a standardized legal structure in place for the safe use of data might be the first step towards creating a safer and fairer Algorithmic system. Without any rules on how data can be used, there is sure to be digital anarchy.

b. Choosing A Gatekeeper

While having a legal structure might be the first step, we also need someone to enforce it. And that’s why we have to choose a gatekeeper who will be able to prevent the misuse of data and the creation of biased systems, and enforce the legal structure.

While in most places the tech companies themselves act as gatekeepers, in regions such as Europe the government acts as the gatekeeper for public data.

c. Representation Matters More Than Ever

Another way of removing bias from the Algorithm’s data is to have diverse representation among the staff. This way, the voices of all demographics will play a part in the design and deployment of the Algorithmic system.

d. Not Repeating The Mistakes Of The Past

Some checks need to be put in place when it comes to the use of historical data. Incorrect implementation of historical data can lead to the Algorithm repeating the same mistakes of the past.

To prevent this from happening, it is important that the computational and data experts at work seek and implement advice from experts in other domains such as sociology, cognitive science and behavioral economics.

This way, they will be able to understand the unpredictable human brain in its various dimensions and create an Algorithm that can weigh context before spitting out an outcome.

e. Providing Transparency

Last but not least, the organizations collecting and using data to create algorithmic systems should provide a clear and transparent policy. Sure, they might not tell us exactly how the Algorithm works, but detailing the limits of the Algorithm will go a long way towards letting us know exactly to what extent our data is being used and what is being accomplished with it.

So, What's The Bottom Line? 

The fact remains that our data can be used to accomplish almost anything, from determining what kind of illness we have to when we are most likely to retire.

The problem, on the other hand, is the biased use of that data, which promotes some voices and profiles over others. And even though the scenario seems very sinister, there are many ways to solve it.

The only thing lacking is human conscience and an unbiased outlook. Once we achieve that, the Algorithm will neither be able to make our decisions for us nor shut down the voices of those who need to be heard.