2018 was a big year for ethics and rights, as new forms of technology and media clashed with what we consider acceptable behaviour in human society. Now, with companies like ours building digital tools that are ever more personal, we need to keep developing our notion of what it means to do the right thing.
We believe that a key component of the coming generation of AI-powered technology will be digital empathy. By plugging our physiological and behavioural data into smart systems, we will enable them to understand our moods and learn to respond appropriately. We can look forward to this empathic technology bringing us richer, more rewarding human-machine interaction. But it also gives us plenty to be concerned about.
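For a concrete (if grossly simplified) flavour of what we mean, here is a minimal sketch of an empathic inference loop. Everything in it (the signals, the thresholds, the mood labels and the responses) is invented for illustration; real empathic systems rely on learned models over far richer data.

```python
# Illustrative sketch only: a toy mood estimator that maps two
# physiological signals to a coarse emotional state. The thresholds
# and labels are invented; production systems use learned models.

def estimate_mood(heart_rate_bpm: float, skin_conductance_us: float) -> str:
    """Map heart rate (bpm) and skin conductance (microsiemens)
    to a coarse arousal label."""
    if heart_rate_bpm > 100 or skin_conductance_us > 8.0:
        return "stressed"
    if heart_rate_bpm < 60 and skin_conductance_us < 2.0:
        return "calm"
    return "neutral"

def respond(mood: str) -> str:
    """How a hypothetical in-car assistant might adapt its behaviour."""
    return {
        "stressed": "soften prompts and defer non-urgent notifications",
        "calm": "proceed normally",
        "neutral": "proceed normally",
    }[mood]

print(respond(estimate_mood(heart_rate_bpm=110, skin_conductance_us=9.5)))
```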
Below are some of the main issues that my team and I are thinking about a lot these days, as our species ventures into an unmapped landscape of increasingly smart, connected technology.
Addressing each in turn then…
The one thing we know about the future is that we don’t know anything. Nobody can tell you exactly what flavours of human-machine interaction will materialise over the coming years, or where to draw the ethical line on them. No doubt our world will eventually be furnished with appropriate legislation, guidance and norms, but they will probably follow the early mistakes, not preempt them.
A recent example is GDPR, which launched here in Europe this year, in response to years of questionable treatment of users’ personal data. A measure like GDPR is important but it is also a relatively small step, following behind the huge technological and social lunges we have been taking as a species. Formal policy is often outdated by the time it is released.
It’s not just that legislation often comes around too late. It may also be useful, even necessary, for the law to stay out of the way in the early stages of disruptive innovation. We’ve seen this in the extent to which the internet has maintained a Wild West-style freedom of access. That freedom may have led to pains like fake news and industry-crushing data piracy, but it has also united, educated and entertained the world like nothing before it. Hence the current stink about preserving net neutrality, as the world wrestles towards a settled long-term position.
I suspect we will see a similar evolution unfold for the kind of empathic technology that my peers and I are developing, which is currently just entering the initial lift in the innovation-adoption curve. A few disparate services have emerged and people are starting to find out what works for them and what doesn’t. Over the next handful of years, riding the coattails of the AI explosion, companies like ours are betting on empathic tech becoming ubiquitous, bundled as part of bigger packages like smart vehicles, smartphones, smartwatches, smart homes… ‘smart’ anything. During that transitional phase we can’t expect government and law to help keep us on the right path, and perhaps we shouldn’t. Instead, we need to lean on the fuzzier framework of social norms and heuristics. You know, good ol’ human relationships.
I fear it would be a waste of energy to lobby people not to use services they don’t entirely trust, even though such caution should be a given. I know I’m guilty of it myself. We’ve seen how easily people give up their data in return for entertaining or useful services, especially if they get them cheaper. We saw this with social networks, which we get for free in exchange for our personal data, and now we have to deal with shit like the Cambridge Analytica scandal. Okay, so users should be discerning, but on the flip-side organisations can’t dodge responsibility by punting the bullshit line that their users don’t mind sacrificing a little freedom and privacy to get stuff for free.
Now mechanisms like GDPR are standing up to personal-data abuse, but people have still been burned along the way. Organisations can’t wait for legislation; they need to take responsibility on their own initiative to prevent such invasive incidents.
It feels like a total cop-out to say this, but organisations (public and private) that adopt intimate practices like physiological or psychological data analysis simply need to do the right thing. They need to avoid exploiting their users. And we users need to keep our eyes open for the signs of such exploitation, rather than blindly adopting the next shiny feature offered to us.
Some of the principles behind GDPR might provide a good starting point for the next generation of digital rights, rules and ethics. Underlying GDPR is increased empowerment and ownership for users with respect to their data. It aims to give each individual rights over their data, such as the right to be informed, the right of access, and the right to erasure. That’s a good start, but what if we extrapolate further? With a little imagination, perhaps we could work towards a more sweeping regulatory framework for our digital lives.
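Purely as a thought experiment, those rights can be read almost as an interface that any data-holding service should implement. The sketch below is hypothetical Python of our own invention, not a real compliance library; it just illustrates the idea that these rights could be first-class operations in any service.

```python
# Hypothetical sketch: core GDPR data-subject rights expressed as an
# interface. The names are invented for illustration, not drawn from
# any real compliance framework.

from abc import ABC, abstractmethod

class DataSubjectRights(ABC):
    """Operations a personal-data service could expose to every user."""

    @abstractmethod
    def describe_processing(self, user_id: str) -> str:
        """Right to be informed: what data is held, and why."""

    @abstractmethod
    def export_data(self, user_id: str) -> dict:
        """Right of access (and portability): hand users their data."""

    @abstractmethod
    def erase_data(self, user_id: str) -> None:
        """Right to erasure: delete the user's data on request."""
```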
Ultimately, every individual user should own their personal data wherever it travels, globally, and have the power to govern how it is used. Of course there are massive practicality issues with this notion of global data ownership. It would not only require huge community support but also some changes to the fundamental architecture of digital services. But technologies like the blockchain are already promising potential solutions, such as digital self-sovereign identity (as neatly explained here).
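For a flavour of what self-sovereign identity implies technically, here is a toy sketch. The key idea is that the user, not any platform, holds the private key, and their public identifier is derived from it. The Ed25519 calls come from the real Python cryptography package, but the identifier format and the consent flow are invented for illustration.

```python
# Toy sketch of self-sovereign identity: the user generates and keeps
# their own keypair; services only ever see a public identifier and
# signed consent statements. The "did:example" format is illustrative.

import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# 1. The user generates a keypair on their own device. No central registry.
private_key = Ed25519PrivateKey.generate()
public_bytes = private_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw
)

# 2. Their identifier is derived from the public key, making it
#    self-certifying rather than granted by a platform.
identifier = "did:example:" + hashlib.sha256(public_bytes).hexdigest()[:16]

# 3. Consent to use a specific data stream is a statement the user signs,
#    and can later revoke, without any platform's permission.
consent = f"{identifier} grants acme-service access to heart-rate data".encode()
signature = private_key.sign(consent)

# A service verifies the grant against the public key alone.
private_key.public_key().verify(signature, consent)  # raises if invalid
print(identifier)
```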
2018 marked the 70th anniversary of the Universal Declaration of Human Rights. Things have changed since then.
Just six days before the UDHR was issued, George Orwell submitted his final manuscript for Nineteen Eighty-Four, describing a dystopian future of universal surveillance that has continued to feel more pertinent with each passing decade. In the same year, Manchester University fired up the world’s first electronic stored-program computer; its first program was just seventeen instructions long. How differently do we need to think about rights now, when each of us carries an incomprehensibly powerful computer in our pocket? What will constitute appropriate behaviour when technology can think for itself?
It is not so much that we are marching blindly into an age of AI-enabled technology as that the tech is rising up to enshroud us where we stand. Doing nothing won’t keep you away from increasingly smart, complex and pervasive digital systems. Accordingly, any provisions to protect and empower us must be as wide-reaching as the innovation itself. Perhaps a new declaration of universal rights is needed for the generations that will live with AI and the IoT. If so, these rights may need to apply beyond what most people would currently consider ‘human’.
Our species has a long history of self-modification, augmenting our natural abilities with technology, from fire and clothing to hearing aids and smartphones. It is easy to promote one set of ethics for ‘them’ machines and another for ‘us’ humans, even while we continue to blur the lines between the two. At the same time, with every data-point we generate through our online activities, we sculpt ever more detailed digital versions of ourselves. With the continued blending of human, cyborg, AI and digital self, we may have to devise a set of ethics that applies to the whole continuum, perhaps culminating in a Universal Declaration of Transhuman Rights.
We’re a startup, still gazing through rosy lenses at the digital utopia we imagine building for future generations. It’s easy for us to preach and prophesy at this early stage, while technology like empathic artificial intelligence is still nascent. We are only starting to stake out our ethical position on this new ground. But already we face commercially driven challenges to the kind of company we want to be. It’s not easy to say no to clients and projects that could secure your business for the next few months. We’ve done it a few times, and it’s financially painful. Additionally, when we publicly define our ethical standpoint on an inherently unknowable future, making claims that could affect our reputation or limit our addressable market, we feel vulnerable and exposed.
But we’re not alone. Anyone building a business model that relies significantly on the storage, analysis or transaction of personal user data, or any other intimate human-machine interaction, cannot ignore the ethical implications of their work. They must draw a line and stay behind it. As scary as that might be, it’s not courageous, it’s just the right thing to do. And take heart: we’ve found that our clients and customers, who are typically huge global brands, appreciate it. They’re run by humans too. And given the choice, many of them prefer the responsible path.
Doing the right thing is often good for business.
Back in May, our CEO & Co-Founder Gawain Morrison sat on panels at RightsCon, where a global community gathered to discuss a wide range of issues around rights in the digital age. A few days later, GDPR kicked in. Positive activities like these are playing out against a backdrop of hugely contentious events, like the Facebook scandal and the net neutrality debate. There’s something in the air right now.
This piece is intended only as a broad kick-off, to extend the conversation to a wider audience. Much more needs to be said, and done. Details need to be fleshed out. Solutions need to be tested, selected and implemented. But we don’t want to leave it any longer.
As we continue to expose our machines to increasingly intimate and accurate data about our bodies, minds and actions, our machines are simultaneously creeping further towards our position on the spectrum of human identity. We are plugging ourselves into the Internet of Just-About-Everything, which is making our digital selves smarter, more pervasive and more capable by the day. Our digital selves need our love, and we need to figure out a new paradigm of ethics and rights for our digital future.