Preventing Bad Actors from Overtaking Social Networks

by Mister Fahrenheit, June 1st, 2019
In my last post, about measuring trust and reputation on social networks (https://community.hackernoon.com/t/measuring-trust-and-reputation-in-social-networks/3080), I discussed the idea that nazis and other bad actors could be kept out of one’s own network, but they could always start their own, wreaking havoc all the same. They would have their own little echo chamber, where they could sway impressionable minds with propaganda and lies.

This raises two questions. How do we keep nazis and other detestable groups from overtaking a network? And how do we solve the problem of propaganda and lies on social networks in general?

In case you missed them, you should read my previous posts first.

I’ll start by saying I don’t have all the answers here (I don’t believe anyone does yet, not that I’ve heard). I’m merely exploring ideas, seeing what sticks and what hits the ground dead, and of course opening these ideas up for discussion.

Let’s dive in.

First, consider the problem of nazis starting their own network: existing in relative isolation, lurking in the darkness until they reach a critical mass, and overpowering smaller networks with sheer numbers (depending, of course, on how the networks operate and calculate consensus). How can we prevent these people from creating a harmful, dangerous network?

One way to do this is to truly implement the idea that “trust is gained in drops and lost in buckets”. If trust accrues slowly through repeated interactions but collapses at the first betrayal, we can make it impossible for nazis to know whether they’re talking to a fellow nazi or the feds. In other words, we can’t stop them from trying to create their own networks, but we can make it incredibly difficult and uncomfortable (and we should).
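To make that concrete, here’s a minimal sketch of what such asymmetric trust scoring might look like. The constants are illustrative assumptions on my part, not anything from a real system:

```python
# A minimal sketch of "gained in drops, lost in buckets" trust scoring.
# GAIN and LOSS_MULTIPLIER are illustrative assumptions.

GAIN = 0.01              # a positive interaction adds a "drop"
LOSS_MULTIPLIER = 25.0   # a betrayal drains a "bucket"

def update_trust(score: float, positive: bool) -> float:
    """Nudge a trust score in [0, 1] up slowly or down sharply."""
    if positive:
        score += GAIN
    else:
        score -= GAIN * LOSS_MULTIPLIER
    return min(1.0, max(0.0, score))
```

The asymmetry is the whole point: an infiltrator can burn months of accumulated trust with a single betrayal, which keeps everyone in a covert network permanently paranoid.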

I realize that in traditional, centralized social networks, it’s as simple as having a “no nazis” policy. In practice, it’s turned out to be much more complicated, with Twitter notoriously refusing to ban nazis and other fascist filth for ages. As of the end of May, Twitter is apparently researching whether it should ban white supremacists. Spoiler alert: yes, you should.

In decentralized networks, things get murkier by their very design: there’s no central authority to write or enforce such a policy. There are, however, a few ways to power through this problem. Let’s say you have an isolated network of nazis, all connected with one another.

One thing you could do is deny them services from the larger networks, effectively cutting them off from society.

Another approach would be to “out” them as nazis to their family, friends, and coworkers, by requiring them to hand the keys to their identity to some elected officials in the larger macro-network (under Shamir’s secret sharing system, where the key is split among many parties, a quorum of whom must cooperate to reconstruct it).
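For illustration, here’s a toy sketch of Shamir’s secret sharing. A real system would use a vetted cryptographic library; the prime, share counts, and integer secret below are assumptions for the demo:

```python
# A toy sketch of Shamir's secret sharing for key escrow.
# Do not use in production; parameters are illustrative.
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for a demo

def split(secret: int, n: int, k: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any k of them can reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Recover the secret via Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(secret=123456789, n=5, k=3)
assert reconstruct(shares[:3]) == 123456789  # any 3 of 5 shares suffice
```

The appeal for this use case is that no single official can unmask anyone unilaterally; it takes a quorum.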

However, this defeats part of the purpose of decentralized networks. It becomes more of a federated network at that point (and maybe that’s a good thing).

These are reactive policies, though. How can we tackle nazis proactively?

A better solution would be to kill the problem at its root, by stopping propaganda and lies from spreading in the first place. How does one stop propaganda and lies?

In case you missed it the first time: I don’t have all the answers. We’re just exploring ideas here.

Now, one approach is to verify factual claims, i.e. true/false statements, against an external source such as the Associated Press. Of course, some people wouldn’t be happy with whichever source you choose, but those people can go elsewhere, to a Fox-powered network sponsored by Alex Jones and the Freakin’ Gay Frogs, perhaps.

Verifying factual information on the fly is tricky, though, in an automated system. What if it weren’t automated? What if humans verified facts?

This is a dangerous idea, because the nazis in this example could simply give each other the thumbs-up, and voilà! They’re verified. The simple solution won’t work.

Back to the drawing board…what if external sources were a requirement? In other words, what if someone outside your immediate circles, as far away from you as possible, had to verify that your claims are true?

This, of course, puts too much responsibility on individuals who just want to use the network. Maybe reputation and trust could be earned by verifying claims correctly, such that the vast majority of nodes in the network agree with you in the end.
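As a rough sketch, here’s one way verifiers “as far away from you as possible” might be chosen, by walking the social graph. The graph representation, hop threshold, and function names are my assumptions:

```python
# A sketch of picking verifiers far from the claimant in the social
# graph. The hop threshold and graph shape are illustrative assumptions.
from collections import deque

def hops_from(graph: dict[str, set[str]], start: str) -> dict[str, int]:
    """Breadth-first search: hop count from `start` to every reachable node."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, ()):
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

def pick_verifiers(graph, claimant, count=3, min_hops=3):
    """Choose the `count` most distant nodes at least `min_hops` away."""
    dist = hops_from(graph, claimant)
    candidates = [n for n, d in dist.items() if d >= min_hops]
    return sorted(candidates, key=lambda n: -dist[n])[:count]
```

The idea being that strangers several hops away have no stake in vouching for you, unlike the people inside your echo chamber.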

I feel like what we need is a “bullshit button”. If something sounds like bullshit, you click the button. It’s low-overhead, there are immediate benefits from the verifier’s perspective (like not seeing bullshit posts), and you can eventually come to a consensus on what constitutes bullshit.

What does constitute bullshit? In the minds of many people, it’s simply something you don’t agree with. Maybe that’s enough for our purposes, but at the same time, it encourages people to build echo chambers, isolating them from ideas that challenge their limited perspective. It would be exceedingly difficult to get people to decide based on the facts (hence, the jury selection process).

Combined with a reward system, where correct assessments earn you trust and reputation, maybe this would work. I’ll have to think on it.
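Here’s a hedged sketch of how the bullshit button and the reward system might fit together. The consensus threshold and reward/penalty sizes are placeholder assumptions:

```python
# A sketch combining the "bullshit button" with trust-weighted rewards.
# CONSENSUS, REWARD, and PENALTY are illustrative assumptions.

CONSENSUS = 0.66   # fraction of trust-weighted votes needed to call bullshit
REWARD = 0.02      # trust gained for voting with the eventual consensus
PENALTY = 0.05     # trust lost for voting against it

def settle_post(votes: dict[str, bool], trust: dict[str, float]) -> bool:
    """Return True if the post is consensus bullshit, then pay out trust."""
    total = sum(trust[v] for v in votes)
    flagged = sum(trust[v] for v, is_bs in votes.items() if is_bs)
    is_bullshit = total > 0 and flagged / total >= CONSENSUS
    for voter, is_bs in votes.items():
        delta = REWARD if is_bs == is_bullshit else -PENALTY
        trust[voter] = min(1.0, max(0.0, trust[voter] + delta))
    return is_bullshit
```

Weighting each vote by the voter’s existing trust is deliberate: a fresh swarm of sock-puppet accounts with near-zero trust can’t out-vote established users, which ties this back to the “drops and buckets” idea above.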

In the meantime, this has been Mister Fahrenheit, bringing you the latest in the intersection between tech and politics. Til next time…

Originally published at https://community.hackernoon.com on June 1, 2019.

<a href="https://medium.com/media/3c851dac986ab6dbb2d1aaa91205a8eb/href">https://medium.com/media/3c851dac986ab6dbb2d1aaa91205a8eb/href</a>