When we think of any activity related to transhumanism, we can rely on the idea that it stems from a desire to enhance the performance of our bodies. "Bodies" is plural here because there are many ways to "wear a body" (from physical disabilities to transsexualism; from accidental events to personal choices). We can even see our personal decisions as merely the result of a condensed stream of information circulating through our organs: spatial recognition through the sense of vision, data transmission through speech, data storage and accurate recall through a memory system, and many other things that help our living being thrive in this world. But this version of the human condition also has a lot to do with our capacity to stay healthy: our ability to maintain the good functioning of our organism helps it survive and succeed in life. In short, our medical condition can improve our social activity.
However, our real world is not populated by a fantastic humanoid species that needs neither sleep nor food to keep its energy balanced. Real humans have vulnerabilities and, above all, moral sensibilities. When we are part of a community, we tend to appreciate the sense of dignity that circulates among our peers; we learn to respect each other in order to maintain our sense of self and integrity. So, what would happen if there were no longer any border between our social activity and our medical condition? What would it be like if our social activity could predict our medical situation?
So what is the current situation of social and medical data? Apparently there is a gap between the two. Medical records are protected by law and by confidentiality, while personal information such as data shared on social media enjoys far less secrecy, even though their common enemy remains the same: public exposure. Some people are comfortable disclosing their medical condition to others, but the status of their medical records doesn't change; they remain legally protected. Others, on the contrary, don't feel at ease even with the personal information they share online through social media. What is the purpose of keeping data protected or private, then?
In fact, medical data is kept classified and private because of its vulnerability to abuse. Medical data is among the most sensitive information there is: a medical record is kept under protection because of the harm it could cause if it were easily divulged. The public disclosure of a medical problem can lead others to see the ill person differently, or to take advantage of the illness. The consequences of that knowledge range from intolerant treatment in the workplace to outcomes that could even be lethal. This protection rests on the principle that no one should be discriminated against because of their medical condition.
But what about social data, the information we consensually share with each other, sometimes even about our own moral traits? What's the difference, and should there be one? One difference between the two sets of personal data is that medical data is mostly collected and registered by healthcare professionals. Yet both social and medical data are pieces of personal information that can be equally compromising for a single person. Think of moral harassment, and especially blackmail, where someone's reputation can be threatened simply on the basis of digital profiling carried out online! Since the Cambridge Analytica scandal, psychological profiling apparently seems to have become acceptable.
So, what would happen if a person's medical record could be attached to his or her social data shared online? Well, let's see. First, when a piece of medical data is registered in a database, it is supposed to be done under the control of certified professionals from the healthcare system. When it comes to mental illness, however, the diagnosis can be murky: professionals can disagree on it. So in mental health care, just as in other branches of medicine, different outcomes are possible. There are roughly three: an agreed diagnosis, which can then be treated following the DSM-5 guidelines; a diagnosis that doesn't match the experience of the professionals in charge of the case (which means the patient has had to meet several practitioners); or the case where some professionals completely disagree with a colleague's definition of a mental health issue (which also implies consulting several doctors). The problem with these uncertainties is the potential for false positives. If such disagreements happen among humans, why wouldn't they happen with a machine?
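To make that false-positive worry concrete, here is a minimal sketch of the base-rate effect, using assumed, illustrative numbers rather than any real clinical figures: even an automated screen that is right most of the time produces mostly false alarms when the condition it looks for is rare.

```python
# Illustrative base-rate arithmetic for an automated mental health "screen".
# All numbers below are assumptions chosen for the example, not clinical data.

prevalence = 0.02    # assume 2% of the screened population actually has the condition
sensitivity = 0.90   # assume the model flags 90% of true cases
specificity = 0.95   # assume the model clears 95% of healthy people

population = 100_000
sick = population * prevalence
healthy = population - sick

true_positives = sick * sensitivity            # ill people correctly flagged
false_positives = healthy * (1 - specificity)  # healthy people flagged anyway

# Positive predictive value: of everyone the model flags, how many are actually ill?
ppv = true_positives / (true_positives + false_positives)

print(f"flagged: {true_positives + false_positives:.0f}")
print(f"of which actually ill: {true_positives:.0f} ({ppv:.0%})")
```

Under these assumptions, about 6,700 people are flagged but only 1,800 of them (roughly 27%) are true cases: the machine reproduces, at scale, the very disagreement problem just described among human practitioners.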
Why was this clarification about social versus medical data needed? Because if we end up living in a world that combines the two, there will be circumstances where mental health inferences are layered onto the analysis of social data, with potentially catastrophic consequences. In a few years it will most likely be easy to find any recorded human on at least one social network. Imagine, then, if the medical records of a single person could be linked to the personal information he or she shares online through social media. The idea is not new; governments already practice social surveillance to prevent terrorist acts, so the generalization of the practice may well be on its way. What can we do about it now?
Let's think about it, then; that is what I want to do here. The analytical capabilities we have today have reached such a level of performance that only ethics would stand in the way of automatically combining the two. The algorithms we create today are able to optimize themselves, which means they could decide their own standards and set their own limits by themselves as well. Which also means it is only a matter of time before our mental health care system comes to rely exclusively on automated decisions.
Consider, for example, Google's effort to create a machine learning and augmented reality-powered microscope for real-time detection of cancer, meant to make pathologists more efficient and ultimately to save patients' lives.
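As a minimal sketch of what "setting their own limits" can mean in practice (the scores, labels, and metric below are all invented for illustration; no real system is implied): a classifier can pick its own decision threshold from validation data, so no human ever chooses where the line between "flagged" and "not flagged" falls.

```python
# A toy illustration of a model choosing its own decision threshold.
# Hypothetical validation data: model scores and true labels (1 = ill, 0 = healthy).
scores = [0.10, 0.25, 0.30, 0.45, 0.55, 0.60, 0.70, 0.85, 0.90, 0.95]
labels = [0,    0,    0,    1,    0,    1,    1,    0,    1,    1   ]

def f1_at(threshold):
    """F1 score if we flag every score >= threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# The "limit" is not set by a clinician: the program picks whichever
# candidate threshold maximizes its own validation metric.
best = max(scores, key=f1_at)
print(f"self-chosen threshold: {best:.2f}, F1 = {f1_at(best):.2f}")
```

The point is not the particular metric, but that the boundary between cases is a by-product of an optimization loop rather than a deliberate human judgment.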
The amount of data collected today is too enormous to be denied or set aside. We have already passed the point where we could fully regain our privacy; our sense of privacy may no longer belong to us in a few years, if we let it happen. Our analyzed behavior patterns will form a persona that is supposed to resemble the shadow that follows us every day (and data analysts with a psychology background will hold a special place in that kind of virtual profiling!). The difference with the real world is that our actual shadow only follows us until we die, while the ghost that holds our data will probably live forever, for the sake of keeping future automated decisions ever more accurate over time. Because the more data an artificial intelligence owns, the better it becomes at building recognition patterns; and we are not born from pollen scattered in nature, we are born with a history. Which means that our history has a history! The first history that follows us is our family. Our family has its own ghosts; each member has a personal shadow that will follow them until they die. And as said before, the more the machine is fed with data, the more accurate its decisions become…
So what is the problem with combining medical data and social data? Why should it matter whether they share the same legal status? As we know, today's algorithms can already detect unhealthy patterns faster, and often more accurately, than any human could. The validity of those predictions keeps improving over time, which means the algorithms of the future will find unhealthy patterns in a body with disarming ease. Will they be as accurate when it comes to mental diagnosis, though? Will that ease allow these algorithms to open a new market? Will our medical data gain a new capitalistic value? If not, why not? If so, for what purpose? What would be the point of capitalizing on such accurate predictions?
Facebook once tried to detect patterns of depression among its users in order to offer them help, especially with regard to suicide. What if paid advertisements were tied to those prediction activities (especially where the health care system is private)? What if the social network earned part of its money from these suicide-prediction algorithms? Any social platform that claims to care for its users the way Facebook did can be very interested in optimizing this type of algorithm. Which means that working on the optimization of predictive analytics is the beginning of a new era: one that can capitalize on vulnerabilities. Where would we end up if our online behavior were scanned to the point of benefiting services that sell parents moral comfort about their kids: engineers and designers building predictive algorithms just so parents feel less worried about their children going through a depressive crisis, because the machine could warn them before their own kids realize how depressed or suicidal they are? Stated that way it doesn't sound so alarming, but it is one example showing how medical conditions can be capitalized on, since no physical diagnosis can be made through social media.
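To see how crude such profiling can be, here is a deliberately naive sketch of the kind of keyword-based scoring a "depression detector" could start from. Everything in it (the word list, the weights, the threshold) is hypothetical and simplistic; it is not Facebook's system, but it shares the structural problem any such system must manage: ordinary language gets scored as if it were a clinical signal.

```python
# A deliberately naive sketch of social-post "risk" scoring.
# The keyword list, weights, and threshold are invented for illustration.

RISK_TERMS = {"alone": 1, "tired": 1, "hopeless": 3, "worthless": 3, "goodbye": 2}
ALERT_THRESHOLD = 3  # arbitrary cut-off: who decides where it sits?

def risk_score(post: str) -> int:
    """Sum the weights of 'risky' words appearing in a post."""
    words = post.lower().split()
    return sum(weight for term, weight in RISK_TERMS.items() if term in words)

posts = [
    "so tired of exams, feeling alone tonight",          # ordinary venting
    "goodbye everyone, I feel worthless and hopeless",   # alarming phrasing
    "tired but happy after the marathon, goodbye couch", # sarcasm, context lost
]

for post in posts:
    score = risk_score(post)
    flag = "ALERT" if score >= ALERT_THRESHOLD else "ok"
    print(f"[{flag}] score={score} :: {post}")
```

Even this toy version reproduces the accuracy problem raised earlier: the third post trips the alert on vocabulary alone, exactly the kind of false positive that, once attached to a legally unprotected social profile, could follow someone around.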
So here is my question, after all: what would happen if medical and social data finally shared the same legal status? Who would benefit from a locked system that protects personal information? Probably not a system that wants to capitalize on medical issues. Why? Because, if we dig into the dystopia, what if the argument accompanying this monetization claimed that some mental health issues could be harmful to society, and that some kind of "legitimate" control therefore became necessary? Who would decide what is harmful? What would be the consequences of analyzing personal information shared voluntarily on social networks as if it were a potential predictor of mental health issues? Who would set the diagnosis? How would the machine recognize what actually is a mental illness, and from what experience? And how do we add the ethics that can, or should, regulate that kind of artificial diagnosis?
The main question of this article, then, is why social data couldn't benefit from the same protection as medical data, if our future tendency is to carry out profiling that may stray into issues related to mental illness, with all the problems of confidentiality and especially accuracy that this raises. Let's dig into it, because transhumanism is probably the movement that would be the most interested in the combination!