A psychologist’s perspective on why, how and when digital privacy violations irk us
Based on his fieldwork, the world-renowned anthropologist Richard Shweder proposed that societies are constructed from three ethics: autonomy, community, and divinity. Autonomy largely concerns our personal sphere: how we construct our individuality and manage our private thoughts and feelings. The communal sphere contains the social norms that keep society running smoothly and uniformly (driving on the right side of the road, avoiding fashion faux pas, etc.). The final sphere, divinity, involves religious and moral beliefs and rituals.
Examples of moral reasoning at each of Lawrence Kohlberg’s stages of development.
This theory has been adapted to understanding how and when moral development occurs. Psychologists like Lawrence Kohlberg once believed that children developed these ethics sequentially. First, a child’s sense of right and wrong depends on whether an action benefits themselves (“autonomy”); then they develop a sense of right and wrong based on social convention (is it legal or illegal?; “community”); finally, they analyze right and wrong based on post-conventional concerns (is the law just?; “divinity”). However, Elliot Turiel and Larry Nucci developed Social Domain Theory after pointing out that the sequential development of these spheres is fallacious: even young children understand that Shweder’s domains are distinct. They generally understand that there’s nothing morally wrong with eating with one’s hands (though people may think it’s strange), that the color of underwear they’re wearing is no one’s business, and that unsolicited harm is wrong even if the rules allow it. The personal sphere doesn’t become irrelevant over time, and the divinity sphere isn’t necessarily the be-all and end-all.
These three social spheres often intersect without problems (e.g., we think we should control who has access to our diary, and this is supported by the moral belief that it’s wrong for someone to violate our trust by reading it without permission). But sometimes these spheres rub against each other, create friction, and draw attention to how different spheres might call for different courses of action. This friction is often reflected in the long, slow dance of social progress, in which social convention is pitted against moral concerns, and in which privacy sometimes shares a contentious relationship with government regulation.
Digital rights is a relatively new arena for these three social spheres to work out their jurisdiction. As I’ve spoken with many people about surveillance capitalism, one theme has emerged: there’s no real consensus on whether people find it problematic, to what degree, or why. Some take a strong neoliberal position: it’s a savvy business model, and what’s wrong with making money off data that users willingly give away? Others are bothered by it but can’t put their finger on why. A few are livid or terrified, fearing some sort of algorithmic fascism. And still others are pragmatic: I have nothing to hide, and I like the services I get, so why should I care? These mixed views tend to be rooted in greater concern for different spheres, and perhaps in a lack of clarity regarding the basic issues underpinning surveillance capitalism.
Carol Dweck’s seminal book about growth versus fixed mindsets, and perceived control over achievements.
Autonomy is thought to be intricately tied to one’s sense of identity. That is, without the power to construe or control one’s surroundings, it’s unclear what the self might mean. Psychologists tend to take for granted that autonomy is a necessary component of healthy psychological functioning. Indeed, “learned helplessness” and a chronic “external locus of control” (both of which refer to a sustained sense that one cannot control the outcomes of one’s life) can lead to depression. Carol Dweck, a Stanford psychologist, has popularized this concept as “mindsets,” finding that fixed mindsets tend to stunt achievement, with her most notable findings in academic performance. When people feel they are in control, they act more effectively. What does all this have to do with privacy?
Privacy is the realm in which autonomy is assured. Even when authorities require us to wear uniforms, we maintain our autonomy by choosing what underwear to wear (or not!). “It’s nobody’s business” quickly conveys that a person has control over that aspect of their life. In an interview, Larry Nucci even noted that people living under dictatorial rule or in death camps have been known to distinguish themselves from one another in the smallest of ways, such as tying an extra knot in their uniform’s filigree. This gives them some semblance of sanity and dignity amidst identity crises borne of too much external power.
When it comes to understanding this sphere in the context of surveillance capitalism, then, an important concern is whether the enterprise could undermine our sense of self, and if so, how. People who are particularly frustrated or frightened by surveillance capitalism find Facebook’s mass-scale emotional experimentation disturbing. Facebook’s infamous study of emotional contagion demonstrated that it could manipulate massive numbers of users into feeling happier or sadder. Part of the reason this disturbs us is that emotions and thoughts are typically considered the heart of privacy. The assurance that no one can manipulate our thoughts and feelings creates a buffer in which we can consider our responses to, and interactions with, the world.
Layer on top of this that big tech has a vested interest in behavioral modification, and the picture becomes troubling to most. Shoshana Zuboff, professor emerita at Harvard Business School and author of The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, has noted that large tech companies time the delivery of ads to match the moods in which people are most likely to purchase. What is autonomy, what is the self, when our emotions, our purchases, and our votes are manipulated without our knowledge? Does the self exist in any meaningful way? The business of surveillance capitalism has been to stealthily modify our private thoughts and emotions, the parts of us that make us feel like us.
Many psychologists refer to humans as “an ultra-social species.” More than any other animal on the planet, we are cooperative. We have common roads, we have manners, and we have assembly lines. We rely on one another like nothing the world has ever seen. These relationships are an important aspect of what makes us human. The “need to belong” is a very basic human need, along with needs like shelter and sleep. It’s not always a good thing, as the infamous Milgram shock experiments and college hazing rituals attest.
As with autonomy, sociality is part of healthy functioning: people who are depressed tend to withdraw from social interactions, including social media. The experience of isolation is extremely negative, as seen in the aftermath of long-term solitary confinement. The intensity of the need to belong is also illustrated by neuroimaging studies showing that loneliness activates the same brain regions as physical pain.
This extreme sociality requires coordination, sometimes in the form of laws, but often in the form of tacit norms. Many such norms have evolved to promote sociality on the web. One is that we are usually up-front about our identities now, as opposed to using bizarre handles like we did in the 90s. It’s standard for a party-goer to look up directions to a house rather than the host providing them to invitees. If you are working on a project with a colleague, you send an email. If you are coordinating coffee with a friend, you send texts.
When it comes to surveillance capitalism, an important consideration is whether people use big tech as an avenue to social belonging. At least in the case of social media, the answer must be yes; in many other cases (such as the email and navigation norms mentioned above) it may be yes as well. And if big tech is an avenue to social belonging, we must ask to what degree individuals are coerced into clickwrap terms of service. In short: if individuals were to refuse the terms of service, how much social capital would they sacrifice? In my view, that sacrifice indicates the degree to which one’s consent to be data-mined is coerced, and therefore illegitimate.
In the context of surveillance capitalism, the moral sphere hinges on questions of big tech’s rights and of user harm. If big tech cannot mine data, what are the consequences for the corporations? What about for users who may lose these services if the companies go under? Certainly big tech would likely take a big loss unless it could redesign its monetization (by charging for services, running untargeted ads, forming media partnerships, etc.). From this angle, heavy regulation seems “unfair” to these innovative corporations and their shareholders, who merely make ongoing, supposedly consenting contracts with users who are free to leave at any time. On the other hand, one might ask what moral grounds they have to monopolize human network effects, or to invade privacy, even with a contract.
The moral sphere, in this case, seems to largely echo back to the social and privacy spheres. Is it okay to pit a person’s desire for privacy against their desire for social connection? If not, is it okay to disrupt monumental achievements of entrepreneurs?
Anyone who’s read my blogs before knows I have a strong bias toward decentralized tech. Decentralizing the web returns decision-making to the hands of all stakeholders. You get to decide who gets the keys to your diary. You get to decide whom to share your data with; no monopoly stands between you and connecting with others. I think decentralized technologies are the only technologies that prevent dilemmas from emerging between the privacy and social spheres.
I am building out the egalitarian infrastructure of the decentralized web with ERA. If you enjoyed this article, it would mean a lot if you gave it a clap, shared it, and connected with me on Twitter! You can also subscribe to watch or listen to my podcast!