I feel crippled and disheartened. Why would somebody take time out of their day to berate you and twist your words? How is it even possible that somebody I don’t even know could cause so much hurt and sour my entire day?
I’ve been there, you’ve been there. None of us have time to let it get us down, haters gonna hate, so why should we care? Yet we do. It is our human nature no matter how rational we try to be: We’re social creatures.
You, me, us. We. So what are we going to do about it? If this is a prevailing biological feature, how can we design and build foolproof systems that work around it? How can we re-engineer our social structures to accommodate our emotional needs?
Ladies and gentlemen, I’m proud to announce a new kind of social network: An emotional intelligence network. Here is how it works, and why we’re building it.
As a teaser for what is ahead, imagine taking the motion-capture software that video games and animated movies use to record actors’ facial expressions, and using it to replace Skype. The point of video conferencing is to communicate our subtle emotional intent, yet the collateral damage is that we clog our bandwidth with wasted pixels.
If we could encode and decode our emotions, as can already be done with Open Source tools, we could save 99% of the bandwidth yet preserve 99% of our intent. Something as simple as this can go a long way, and it quickly opens a floodgate of new ideas. But first, we must start with the basics and build up.
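To make the idea concrete, here is a minimal sketch. The blendshape names are illustrative, loosely modeled on the parameters facial-capture rigs actually track; the exact set and the bitrate figures are assumptions, not measurements:

```ts
// Instead of streaming pixels, stream a handful of expression weights.
// These five blendshapes are illustrative; real rigs track dozens.
type ExpressionFrame = {
  smile: number;      // 0..1
  browRaise: number;  // 0..1
  jawOpen: number;    // 0..1
  blinkLeft: number;  // 0..1
  blinkRight: number; // 0..1
};

// Quantize each weight into one byte: five bytes per frame.
function encode(frame: ExpressionFrame): Uint8Array {
  const weights = [frame.smile, frame.browRaise, frame.jawOpen,
                   frame.blinkLeft, frame.blinkRight];
  return Uint8Array.from(weights.map(w => Math.round(w * 255)));
}

// The receiver decodes the bytes and drives an avatar with them.
function decode(bytes: Uint8Array): number[] {
  return Array.from(bytes, b => b / 255);
}

// At 30 frames per second, that is ~150 bytes/s of emotional intent,
// versus hundreds of kilobytes/s for even heavily compressed video.
const bytes = encode({ smile: 0.8, browRaise: 0.2, jawOpen: 0.1,
                       blinkLeft: 0, blinkRight: 0 });
console.log(decode(bytes)); // ≈ [0.8, 0.2, 0.1, 0, 0]
```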
Despite how manipulative Facebook may be, they did get “likes” right. This should be considered the most basic level of an emotionally intelligent network. If you disagree with this point, you are going to have a hard time digesting the rest of the article.
Here is the theory: positive reinforcement is more effective than negative reinforcement. If you do not like something that somebody says, do not waste time trying to drag them down. Your interaction with that person will only draw more attention to their ideas, which unintentionally helps spread them. The best way to kill an idea is to ignore it.
What about censoring it? Hopefully history has already answered this one. Censorship is bad because it causes psychological reactance, which in turn causes people, even those with truly terrible ideas, to become inflamed with the conviction that they are even more right than before. They now have a fighting cause to die for. And people will, even for stupid ideas.
So how do we combat stupidity? By celebrating good ideas. Positive reinforcement encourages (pun intended) us to vocalize and amplify ideas that we think are worthwhile. We are herd animals, and due to our social nature, we become subliminally brainwashed by culturally reinforced ideas. It is not easy or natural to be positive, but it is effective.
For instance, Coca-Cola (open happiness!) spends millions of dollars every year to reinforce your brain (well, that is, after it became illegal for them to slip cocaine into your drink). If they could not drug your body, they figured out how to drug your psychology.
The ability to only upvote, where liking is as easy as a finger poke and there is no downvote button, is only the beginning of how we retrain our brains. It draws attention to good content and lets bad content fade into the background, improving the signal-to-noise ratio.
However, it does have the side effect of forcing haters into the comments. We’ll address that in the next section.
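Before we get there, here is a minimal sketch of what an upvote-only record could look like. The names are hypothetical and the structures are plain in-memory ones for illustration; a real network would replicate this data peer-to-peer:

```ts
// postId -> the set of user IDs who upvoted it; a set makes the vote
// idempotent, so poking "like" twice still only counts once.
const upvotes = new Map<string, Set<string>>();

function upvote(postId: string, userId: string): void {
  if (!upvotes.has(postId)) upvotes.set(postId, new Set());
  upvotes.get(postId)!.add(userId);
}

// There is deliberately no downvote(): the only way to lower a post's
// relative rank is to upvote something better.
function score(postId: string): number {
  return upvotes.get(postId)?.size ?? 0;
}
```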
Hopefully the extra effort it takes to write a comment, versus simply tapping “like”, will weed out a significant number of haters while simultaneously boosting the total positive count. That aside, we all know (if we have been anywhere near the internet) that some despicable person or troll will take up the challenge.
Their thirst for the sadistic satisfaction of ruining your day is like a vampire’s lust for blood. So how do we prevent this? First, let it be clear that we are not trying to discourage dissent. We are trying to facilitate discussion by pushing it into the comments, rather than creating silent voting wars of “my side” versus “their side”. That leads nowhere.
Now, secondly, just because there are comments does not mean that there will be a quality discussion. Far from it: one bad apple can ruin everything. So our goal should be to weed out those bad apples. But without the ability to downvote (or to “report abuse”, which is highly subjective and will itself be abused), how can we identify trolls in text?
An ‘emotional intelligence network’ will be fully peer-to-peer, for reasons to be explained later. As a result, we can have machines assist us in identifying trolls through sentiment analysis, with personalized rules that each of us decides. This acts as the “downvote” but prevents abuse because it is individualized.
Sentiment analysis attempts to calculate how positive or negative a comment is. It does not care about political view, religious affiliation, or any other meaningful opinion. That is to say, it takes no sides in an argument, and it has no echo chamber effect. Instead, it can loosely approximate whether or not somebody is a hater. But there are many nuanced considerations in how it gets applied.
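As one concrete illustration, here is a minimal sketch of such a personalized, client-side filter. The word lists and threshold are placeholder assumptions; the point is that every user tunes their own rules, so this “downvote” never becomes a communal weapon:

```ts
// Placeholder lexicons: in a real client each user would edit these,
// or swap in a proper sentiment model of their choosing.
const NEGATIVE = ['hate', 'stupid', 'idiot', 'trash', 'pathetic'];
const POSITIVE = ['thanks', 'great', 'agree', 'interesting', 'love'];

// Crude lexicon scoring: +1 per positive word, -1 per negative word.
function sentimentScore(comment: string): number {
  let score = 0;
  for (const word of comment.toLowerCase().split(/\W+/)) {
    if (POSITIVE.includes(word)) score += 1;
    if (NEGATIVE.includes(word)) score -= 1;
  }
  return score;
}

// Nothing is deleted from the network; a comment below *my* threshold
// is merely hidden from *my* view. Your threshold is your own.
function visibleToMe(comment: string, myThreshold = -1): boolean {
  return sentimentScore(comment) >= myThreshold;
}

console.log(visibleToMe('You are a pathetic idiot')); // false
console.log(visibleToMe('Interesting, thanks!'));     // true
```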
If there is one phrase that should stick with you, let it be that emotional intelligence is a reflection of the self. When we look at ourselves from an outside perspective, we have to reconcile whether that is who we want to be. This not only expands our theory of mind, the capacity to empathically relate to others, but it also lets us mirror new behavior — breaking bad habits, or choosing new ones.
The biggest possible mistake we could make is to hand that mechanism over to someone else or something else. There is no better machine learning algorithm or artificial intelligence than your own ability to positively reinforce yourself.
If the pantheon comprised the gods of the Greeks, then moderators are the gods of the forums. And in every internet community I have been a part of since 1999, the mods have been a recursively self-destructive force of nature. So, is it possible to eliminate the need for them entirely? Yes. In fact, in an emotional intelligence network, it is necessary.
We talked about censorship a little bit earlier, about how it is an ineffective means to an end. Moderators, more or less, are the gatekeepers that prevent spammers from ruining the social network. They are human ad blockers, defending Rome from the barbarian invasion.
This seems like a noble role, until we invariably discover that something we have said has been censored, flagged as spam or as offensive, or tagged as “fake news”. Often, ironically, by a mod who was born after we had joined the community, or by a user who has a better “reputation” than us. The worst I have seen is an employee banning someone for violating the “ToS”, and then modifying it after the fact to retroactively cover their tush.
But it was not the barbarians that caused the fall of Rome; they were merely the straw that broke the camel’s back. Rome collapsed from within, and it does not take long for Troy’s moderator to turn out to be a Greek horse. Internet forums are like watching the feuds of politics and history replay themselves, again and again.
There is no emotional intelligence in the zero-sum game for power. Especially when that power is to censor, oppress, silence, and dismiss your opponent through bullying.
So how do we stop spammers from ruining our community if moderator powers do not pass the emotional intelligence test? If we combine points (2) and (3) from the sentiment analysis section, there are two new observations.
Isn’t this censorship? No. It does not prohibit any other user from seeking out and viewing that material. To each their own. Oddly enough, blocking ads seems to be a more controversial subject than banning body parts.
But what about server costs? Blocking advertisements is stealing away their revenue! That is unethical.
Does emotional intelligence obligate us to look at ads? No. If we do not want to be bullied, we should not bully others. Should I be able to physically coerce you to have sex with me? If the answer is clearly no, then how far can we take this? Does it also apply to sight, sound, or smell? Should we be able to force somebody to smell our genitals? What about looking at them?
If your answer is also no to that, it should be clear that there is no ethical justification for forcing someone to look at an advertisement. As a side note, this is also where public indecency laws came from in the first place, despite seeming Victorian to many. Sure, you can avert your eyes, but what we are groping — erm, pardon, grappling — with here is obligation. There is none.
Will emotionally intelligent networks be ad-powered? If ad blocking is no longer conflated with stealing, we still have the question of revenue. How do we offset server costs? Well, we do the same thing we did with moderators: if we eliminate the need for servers, then there will no longer be any server costs.
But how?
Here is the moral argument for piracy. In an age of ad blocking, torrenting is the mechanism of free content distribution that replaces and subsidizes servers. If your users are “stealing” your content, what better way to make them pay for it than by having them host the content at their own expense? Let us expand.
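To see why peer hosting changes the economics, consider a toy model. The content size and audience figures below are made up purely for illustration:

```ts
// Toy model (illustrative only): once a reader finishes downloading,
// they re-serve ("seed") the content to later readers, so the origin
// server carries a shrinking share of each new download.
function serverShare(seeders: number): number {
  // With n seeders, the origin server is just one of n + 1 hosts.
  return 1 / (seeders + 1);
}

let seeders = 0;
let serverBytes = 0;
const CONTENT_SIZE = 100_000_000; // 100 MB, a placeholder figure

for (let reader = 1; reader <= 1000; reader++) {
  serverBytes += CONTENT_SIZE * serverShare(seeders);
  seeders++; // each reader becomes a host at their own expense
}

// ~749 MB served by the origin for 1000 readers, instead of the
// 100 GB a server-only model would pay for; the swarm absorbs the rest.
console.log((serverBytes / 1e6).toFixed(0), 'MB from the origin server');
```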
If nobody is pirating your content, then you should have no server costs (and it is probably a sign you need better content). If everybody is pirating your paywalled content, then that probably means you are charging too much for it. The fair market price is where those two things balance out.
That balance means both parties come out with a net gain: a win-win system. Non-zero-sum games are the inevitable course of history. Why? Because, mathematically speaking, win-lose has no growth, and lose-lose systems self-destruct. So win-win systems always win over time.
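The arithmetic behind that claim can be sketched in a few lines. This is a toy payoff model, not a formal game-theoretic proof:

```ts
// Cumulative payoff for each party after n repeated exchanges.
function cumulative(perRound: [number, number], n: number): [number, number] {
  return [perRound[0] * n, perRound[1] * n];
}

console.log(cumulative([+1, +1], 100)); // win-win:   [100, 100]   -> growth
console.log(cumulative([+1, -1], 100)); // win-lose:  [100, -100]  -> net zero
console.log(cumulative([-1, -1], 100)); // lose-lose: [-100, -100] -> collapse
```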
But I’ll make less money without a paywall! That is a loss, not a win!
Before you can monetize content, you first must produce that content. And once it is created, there is no guarantee it will be successful. So it is a risk either way. Certainly, giving away your content for free prevents further monetization, but it does not incur a loss. At least not for digital goods.
The consumer clearly wins, gaining knowledge from the content. But it is harder to understand the win for the producer. Popularity and influence are nice, but they do not exchange directly into cash. This is admittedly difficult. But let us tackle it by analyzing the extremes, like we did with piracy.
Here is the thing: torrenting won’t stop starving artists, but it may make them famous. Torrenting may, however, stop greedy studios from making cash, but that won’t stop a studio from trying to be greedy. Meaning torrenting stops neither starvation nor greed; those are behaviors.
So what about the non-hyperbolic cases? What about studios that have money to fund beautiful projects? Those projects need to be created by artists. And what better way to find talent than by looking at highly torrented starving artists? In an upvote-only world, piracy is a vetted discovery mechanism.
This is the indirect exchange that results in a direct cash exchange: a salary. That is, for however long an archaic system like cash stays around. Currency is stuck in a legacy system where economic goods are made of finite material incapable of being copied without labor. But this is no longer the case.
What that future economic system looks like is the subject of another article. But the only way to create any future is to create a bridge from the past. So we have one last issue that needs to be resolved. How do non-hyperbolic studios get compensated for the projects they fund? How do they win?
The content is not the product (unless it is in a finite medium); the seat is. In the film industry, this is already the case: theaters have limited seats, and that is what is being sold. There are equivalents in other sectors, but those seats are useless without an experience. And this is where the studio profits.
A studio’s customer is a theater, and a theater’s product is the seat. Those things cannot be reproduced without excess labor, but the content can be. Each economic system only works within its own rules, and the digital era has broken the assumptions of the prior. But a bridge is possible.
The moral of the story here is that piracy is only bad in finite terms. But piracy in modern terms is not only good, it is the morally inevitable future of win-win economies of scale.
We do not have to theorize about the future of social networks; we can build them instead. It won’t be easy, though: it is hard to transition society from one era to another. Technology is only a glimmer of what is to come; it is not a replacement for our social network — the people we know.
If we are not emotionally adept with colleagues or foes, we will burn the bridge to a better tomorrow. When discussions are shut down, when news is censored, when dissent is silenced, be wary not of the side you stand on, but of the hand that orchestrates the divide in the first place.
There is profit and power in ideological war, just as there is opportunity in coups and arms dealing. Both governments and corporations benefit from win-lose rifts, because the conflict distracts people from the fact that they are selling bullets to both sides, or that Seattle’s Best is just Starbucks.
Our most precious resource is the people we know. Torrents may be necessary for the economics of tomorrow, but your social network is the wealth of the era beyond that one. This is far-reaching, but the impacts are pivotal to our evolution into the Fourth Industrial Revolution.
And that is why we need a new kind of social network, one based on emotional intelligence and tools that decentralize ideological power into the hands of individuals — not governments or companies. That is why we have built an Open Source database with end-to-end encrypted authorization.
Welcome to GUN: we are the arms dealer of data and ideas, equipping the masses with digital weapons of autonomy and cultivating an Open Source community with the morals of emotional intelligence. Because we believe the best way to win wars is to resolve them before they even begin.
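Here is what that looks like in practice: a minimal sketch using GUN with its SEA (Security, Encryption, &amp; Authorization) layer. Treat it as illustrative; consult the project’s documentation for the current API, since details may differ by version.

```ts
import Gun from 'gun';
import 'gun/sea'; // SEA: GUN's security, encryption, & authorization layer

const gun = Gun();
const user = gun.user();

// The keypair is derived and kept client-side; the network only ever
// sees signed and encrypted data, never the passphrase itself.
user.create('alice', 'a long passphrase', () => {
  user.auth('alice', 'a long passphrase', () => {
    // Writes to the user graph are signed with alice's keys, so peers
    // can verify authorship without any central moderator or server.
    user.get('profile').put({ mood: 'hopeful' });
  });
});
```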
The future is yours; come create it with us today.