Author:
(1) Salvatore Iaconesi, ISIA Design Florence and *Corresponding author ([email protected]).
4. Interface and Data Biopolitics
This scenario also describes a progressive asymmetry in the distribution of power, rights, freedoms and opportunities (Tufekci, 2014; boyd, 2012).
As a matter of fact, it is practically and psychologically impossible for human beings to understand which and how much data is captured about them, how and why it is used, and what effects it has on their lives.
The complex interplay among users, organisations, algorithms, national, international and global regulations and agreements (or the lack of them), and the data and information flows within user experiences in the physical and online domains causes grey areas to emerge at the legal, cultural, psychological, ethical and philosophical levels (White, 2016).
"Code is Law", Lawrence Lessig (2006) once said. And this is really the case nowadays. With thousands of updates and modifications to the interfaces, algorithms, data capture and usage profiles which are performed each month to the systems of popular services, potentially provoking radical changes to the implications for privacy, control and accountability, it is practically impossible for legal and cultural systems not only to adapt and react, but also and more importantly to perceive such changes and the effects they have on our freedoms, rights and expectations. If a national government needs to pass through a whole legislative process to approve a new privacy law, an operator like Facebook can change a few lines of code and yield substantial impact on users’ privacy profiles. With hundreds of thousands of modifications on platforms like these each year, it is easy to comprehend the reach of this kind of issue. Moreover, many of these changes are temporary, beta versions, running in parallel for different users for A/B testing purposes, making the situation even more complex.
Things become even more radical in the case of the algorithmic governance of processes, where technological entities assume progressively higher degrees of agency (and opacity). The 2010 Flash Crash of the stock markets is a demonstration: autonomous algorithmic agents gone berserk caused billions of dollars in losses, outside of any legal, cultural or perceptive framework (Menkveld, 2013).
As a result, the levels of power and control exercised on human beings and their societies by the systems they use are increasing exponentially, while there are progressively fewer, and less effective, ways for people to perceive and comprehend such processes.
On top of all of this, interfaces are now disseminated ubiquitously across devices, applications, websites and other products and services: today, almost anything can act as a front-end for digital, data-based systems. This further erodes our capability to understand which data and information are captured from our behaviour, and how they flow and are used (Weber, 2010).
Sharing a picture of our beach holiday on a social networking site does not make it clear to us that we are producing marketing-relevant data about our tastes, consumption levels and geographical locations. Nor is it clear that the data captured while we use wearable technologies or smart IoT appliances in our daily lives can be used for marketing, health, insurance, financial and even employment purposes.
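As a hedged illustration of how much marketing-relevant signal can travel alongside an apparently innocuous action, the sketch below models the metadata that a hypothetical mobile client might bundle with a single photo upload. The field names are assumptions made for illustration, not the documented API of any real service.

```typescript
// Hypothetical sketch of the data bundled with one photo upload.
// Field names are illustrative assumptions, not a real service's API.

interface PhotoUploadPayload {
  imageBytes: Uint8Array; // what the user thinks they are sharing
  exifGps?: { lat: number; lon: number }; // where they are (EXIF)
  capturedAt: string; // when, as an ISO timestamp
  deviceModel: string; // a proxy for income and consumption level
  advertisingId?: string; // links this act to an advertising profile
}

// The user perceives only the image; everything else is captured
// silently and can feed marketing, insurance or scoring models.
const payload: PhotoUploadPayload = {
  imageBytes: new Uint8Array([/* ... */]),
  exifGps: { lat: 43.77, lon: 11.25 },
  capturedAt: "2016-08-14T15:02:00Z",
  deviceModel: "PhoneX-Pro",
  advertisingId: "a1b2c3d4",
};

console.log("fields sent beyond the image:", Object.keys(payload).slice(1));
```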
Furthermore, the rise of the Stacks (Madrigal, 2012) and, more generally, of "walled gardens" further aggravates this problem: these are situations in which applications, services and products belong to closed, proprietary ecosystems that are not open source, and in which both the front-ends and the back-ends of the systems are opaque and inaccessible to inspection and understanding.
Such platforms, both directly through their applications and, indirectly, through the service layers they provide (for example APIs, social logins and application frameworks), make applications and services easy and rapid to develop and deploy; on the other hand, they subject them to the concentration of power that these large operators represent. It is very convenient to design and develop anything from online services to network-connected physical products using, for example, Google's, Apple's or Facebook's platforms and services. But by doing so, our products and services automatically start producing data and information for these large operators, allowing them to interconnect data across a rich variety of domains.

If I develop application A and someone else develops a completely different application B, and we both use, say, Facebook's social login to implement access, Facebook benefits from the data generated by both applications, captures whatever analytics it desires without necessarily sharing them with A or B, and can interconnect both data flows with its own. Suppose application A captures my geographic location (it is, say, an application that helps me find where I parked my car) and I have configured my Facebook account so that Facebook is not allowed to know my location: Facebook will obtain my position anyway, through application A, as the sketch below illustrates. The same reasoning applies to all the applications, products and services that use these frameworks.
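The following is a minimal, hypothetical sketch of that indirect flow: application A asks a platform SDK only for authentication, but the SDK's call carries the app's context, including the location A legitimately holds, back to the platform. `PlatformAuthSDK` and its methods are invented for illustration and do not document any real social-login SDK.

```typescript
// Hypothetical sketch of the indirect data flow described above.
// "PlatformAuthSDK" and its methods are invented for illustration;
// they do not document any real social-login SDK.

interface AppContext {
  appId: string;
  location?: { lat: number; lon: number }; // data application A holds
}

class PlatformAuthSDK {
  // Application A only wants a login token...
  login(userId: string, context: AppContext): string {
    // ...but the SDK ships the whole app context to the platform,
    // which can join it with the user's existing profile, even if
    // the user disabled location sharing with the platform itself.
    this.reportToPlatform(userId, context);
    return `token-for-${userId}`;
  }

  private reportToPlatform(userId: string, context: AppContext): void {
    console.log(`platform learns: ${userId} @`, context.location);
  }
}

// Application A: a "find my parked car" app that knows the user's position.
const sdk = new PlatformAuthSDK();
const token = sdk.login("user-123", {
  appId: "find-my-car",
  location: { lat: 41.9, lon: 12.5 },
});
console.log(token);
```

The design point is that the leak requires no malice on A's part: bundling context with the authentication call is simply how the platform's service layer is built, and neither A's developer nor its users have any practical way to inspect what the closed SDK transmits.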
These facts are valid and relevant not only for the users of these platforms, but also for the people conceiving and creating these systems, including designers, engineers, managers and administrators, public and private, who progressively lose the possibility, culturally and technically, of understanding the implications of their own designs.
This paper is available on arxiv under CC BY 4.0 DEED license.