
5 Insights About Digital Social Responsibility in light of Facebook & Cambridge Analytica

by Jake Stanley, April 10th, 2018

Communities have always been positively and negatively affected by the actions of individual members. So why do people behave differently in the digital world?

If you haven’t already, check this link (https://www.facebook.com/help/1873665312923476) to find out if your Facebook data was shared with Cambridge Analytica.

Here are my results:

Yep. I was one of the 87 million people whose data was shared with Cambridge Analytica as a result of a controversial Facebook app called “This Is Your Digital Life”. Interestingly, I never used this app myself. My only error? Being friends on Facebook with someone who did. As you can see in the screenshot, my friend’s participation authorized the sharing of my public profile, page likes, birthday, and current city.

Debating the implications, consequences, and whether or not this controversy actually caused me any harm is best for another article. For now, I’m interested in digging into some potential factors that contributed to my friend thinking it was socially acceptable to share my birthday and zip code with a random developer. Wouldn’t this same transaction of information have seemed strange in real life?

Photo by Brian Solis

Would you give away a friend’s birthday or location when chatting face-to-face with a stranger? Probably not. So what’s different? Here are 5 potential factors:

1) Digital Consequences aren’t obvious

My friend who used this app certainly wasn’t doing anything malicious. When you authorize an application like this, Facebook lists the items you will be sharing, but says nothing about what could be done with that information. Without proper education, how is an average person expected to weigh the pros and cons, all while battling a strategic UI luring them to click “continue”? Whose role is it to educate? Facebook certainly has no incentive to slow you down. Facebook also did not tell me which of my friends caused my data to be shared. But what if it did? That would certainly be embarrassing, but the added transparency, and the potential for public repercussions, would have incentivized a second glance before clicking “authorize”.

2) Digital Consequences are deferred

We’ve convinced ourselves that if consequences do occur from our online actions, they will surely happen far off in the future: a month, a year, decades from now. It’s an ignorant justification, and it’s why we’re also to blame for @CamAnalytica. In my experience, consequences that do not quickly follow a decision are less likely to deter that behavior in the future. Scolding my dog days after I’ve discovered she chewed something up rarely leads to improved behavior.

3) Consumers undervalue data

Mark Zuckerberg has built a very successful data-gathering machine. Every keystroke and behavior is analyzed and converted into a marketable, monetizable asset. Facebook’s Terms of Service state that you own your data, but who owns the new insights and observations derived from it? Your clicks, scrolls, and content are worth something, and that number is much higher than $0. How much? Robin Bloor does a great job of trying to quantify the value of your data in this article.
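One crude way to see that your data is worth more than $0 is a back-of-envelope calculation dividing Facebook’s revenue by its user count. The figures below are approximate, publicly reported 2017 numbers, used purely as illustrative assumptions:

```python
# Back-of-envelope estimate of what an average user's data is worth
# to Facebook per year. Figures are approximate 2017 public numbers,
# used as illustrative assumptions, not exact accounting.
annual_revenue = 40.65e9        # reported 2017 revenue, USD (approx.)
monthly_active_users = 2.13e9   # reported MAU, end of 2017 (approx.)

revenue_per_user = annual_revenue / monthly_active_users
print(f"Average annual revenue per user: ${revenue_per_user:.2f}")
```

That average hides huge regional variation, and it measures what Facebook extracts per user, not what your data might fetch in other hands, but it puts a floor well above zero.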

4) Consumers underestimate the scalable nature of the internet

When we offer up information in the physical world, we first run some simple math in our heads: “What’s the worst that could happen if I give this salesman my phone number?” We estimate the potential damage this person could cause (based on realistic human capabilities), weigh the pros and cons, and make a decision in line with our risk tolerance. When computers are involved, however, the average consumer vastly underestimates the speed, scale, and reach of automation-assisted systems. To put it in perspective, readily available cloud resources can process 1,000+ records per second; I’m sure some of you have even more powerful examples. Authorizing an app to access my friends list seems harmless on its own. Yet when mixed, matched, and overlaid with other data sets, a list of page likes can yield scary results. Check out Georges Abi-Heila’s article, “Your Facebook Data Is Scary as Hell”.
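To make the “mixed, matched, and overlaid” point concrete, here is a minimal sketch of joining two data sets, a hypothetical public-profile table and hypothetical app-harvested page likes. The names and fields are invented for illustration; the point is that this overlay is a few lines of code and runs just as easily over millions of rows:

```python
# Minimal sketch of overlaying two data sets: hypothetical public
# profiles and hypothetical app-harvested page likes. All names and
# fields here are invented for illustration.
from collections import defaultdict

profiles = {
    "alice": {"city": "Austin", "birthday": "1990-04-12"},
    "bob": {"city": "Denver", "birthday": "1985-09-30"},
}
page_likes = [
    ("alice", "Yoga Daily"), ("alice", "Keto Recipes"),
    ("bob", "Gun Club Weekly"),
]

# Group likes by user, then attach the matching profile fields.
likes_by_user = defaultdict(list)
for user, page in page_likes:
    likes_by_user[user].append(page)

combined = {
    user: {**profiles[user], "likes": likes_by_user[user]}
    for user in likes_by_user
}
print(combined["alice"])  # city + birthday + inferred interests, merged
```

Each record in isolation is mundane; the merged record starts to look like a targeting profile, and nothing about the code cares whether there are two users or two hundred million.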

5) Limitless number of “friends”

Digital communities aren’t a “fixed pie”. In the physical world, personal circles are dictated by the number of relationships a person can juggle, whereas in the digital world, personal circles are limited only by database capacity. For example, I’ve found that when I move to a new town or job, making new friends almost always leads me to drop older friends out of my community. After all, it’s only humanly possible to manage so many friendships. “Dunbar’s Number” suggests that people can only manage relationships with about 150 friends at a time. It seems we implement some version of a FIFO method to keep our casual acquaintances at a manageable number. Furthermore, physical relationships require two-sided attention: if one party stops participating in a friendship, the relationship fades. This dynamic doesn’t seem to exist in the digital world. I don’t actively contribute to the 1,000+ friends in my Facebook friends list, yet keeping them as friends apparently represents some level of risk.

Smaller, physical communities function well because there is a set of norms, expectations, and a tangible understanding of how individual decisions may negatively affect all members of the group. Unwieldy social communities, by contrast, seem impenetrable; surely poor decisions couldn’t affect all these people? LinkedIn apparently enforces a 30,000-connection limit. I wonder what the reason is for that specific number. Would a fixed community limit on social platforms, one that more closely reflected our real-world circles, help?
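The FIFO behavior described above, where meeting someone new quietly pushes an older acquaintance out of the circle, can be sketched in a few lines. The 150-person cap comes from Dunbar’s Number; the names and the 200-person timeline are assumptions for the sake of the sketch:

```python
# Illustrative model of the FIFO acquaintance cap: once a circle hits
# a Dunbar-style limit, adding a new friend drops the oldest one.
# The limit of 150 comes from Dunbar's Number; everything else here
# is a hypothetical example.
from collections import deque

DUNBAR_LIMIT = 150

friends = deque(maxlen=DUNBAR_LIMIT)  # oldest entry drops off automatically
for i in range(200):                  # meet 200 people over a few years
    friends.append(f"person_{i}")

print(len(friends))   # still capped at 150
print(friends[0])     # the 50 earliest acquaintances have faded away
```

A Facebook friends list has no such `maxlen`; it only ever grows, which is exactly why a dormant 1,000-person list can carry risk that a physical circle never accumulates.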

What do you think? Do we share the blame with Facebook, or should they have prevented this in the first place? How can we create better incentives or regulations to equip people to make better online decisions?


If you enjoyed this, feel free to clap 👏. Follow me on Twitter, or check out what I’m working on.