No self-respecting artificial intelligence researcher would claim that A.I. is going to take over the role of humans in this world any time soon. There are still many fundamental A.I. challenges standing in the way of superhuman computers. These challenges will not disappear, not even in the data revolution (or, if you will, “A.I. revolution”) that we are facing right now.
Through the example of an emerging field of science we are exploring at the computational sensemaking lab run by my colleague Martin Atzmüller, I want to show what work is actually being done at the edge of fundamental and applied A.I., and why this is relevant for science, industry and society.
In this post I will argue that adding sensors to our world, and having algorithms figure out what is happening based on that sensor data, is an important line of research at the heart of A.I., and that this quest is perhaps even more relevant to industry.
To quote A.I. researcher Melanie Mitchell: “The race to commercialize A.I. has put enormous pressure on researchers to produce systems that work ‘well enough’ on narrow tasks. But ultimately, the goal of developing trustworthy A.I. will require a deeper investigation into our own remarkable abilities and new insights into the cognitive mechanisms we ourselves use to reliably and robustly understand the world.”
I fully agree with the impactful op-ed Melanie wrote last November in the New York Times. She continues: “Today’s A.I. systems sorely lack the essence of human intelligence: understanding the situations we experience, being able to grasp their meaning.” Understanding context, multiple interpretations and nuance is a core human capability.
Take, for example, the following image of sandy curves:
Photo credit: Bureau of Land Management California
Computers have the greatest difficulty telling the difference between the sandy curves above and, for example, the human curves below:
Photo credit: Heitor Magno
This is a case where humans outperform computers in understanding the world around them on a fundamental level.
And although I’d love to discuss fundamental issues in A.I. further, I will leave it at this for now; there is a lot more interesting research to do. The point is that industry might just be able to help deal with fundamental issues like these, by running proofs of concept that apply A.I. in the real world. And the possible applications might surprise you.
Returning to Martin’s computational sensemaking lab: last year we realized that, with the emergence of the Internet of Things, smart devices, and ubiquitous computing, computational sensemaking allows the collection of multi-modal interaction data at an unprecedented scale. These new technologies give us insight into human behavior, and enable structural modeling and analysis of social interaction structures.
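To make “structural modeling and analysis of social interaction structures” a bit more concrete, here is a minimal sketch of the kind of analysis this involves. It assumes hypothetical proximity data (who was near whom, and for how long), for instance from wearable badges; the contact list, names and threshold are invented for illustration and are not taken from our lab’s actual pipeline.

```python
# Minimal sketch: turning hypothetical proximity-sensor contacts into a
# social interaction graph. The contact list and threshold are made up.
import networkx as nx

# (person_a, person_b, total_contact_minutes) -- illustrative data only
contacts = [
    ("ann", "bob", 35),
    ("ann", "carol", 5),
    ("bob", "carol", 60),
    ("bob", "dave", 12),
]

G = nx.Graph()
for a, b, minutes in contacts:
    if minutes >= 10:  # ignore very brief encounters
        G.add_edge(a, b, weight=minutes)

# Simple structural questions: who is most central, how dense is the group?
print(nx.degree_centrality(G))
print(nx.density(G))
```

On real data, graphs like this let us ask which interaction patterns go together with how people actually report feeling.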
Current research on happiness mainly relies on self-report methods such as the day reconstruction method and experience sampling. Although these methods have strong advantages, they also have disadvantages that can be compensated for by using mobile devices, sensor networks and wearable sensors.
Figure from Towards Estimating Happiness Using Social Sensing, Atzmüller, Kolkman, Liebregts & Haring
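As a rough illustration of how passive sensing can complement experience sampling, the sketch below pairs a hypothetical experience-sampling prompt with the sensor readings from the hour before it. All timestamps, field names and values are invented for the example; they do not come from our study design.

```python
# Sketch: aligning hypothetical experience-sampling answers with the sensor
# window that preceded each prompt. Data and fields are illustrative only.
from datetime import datetime, timedelta

sensor_readings = [  # (timestamp, steps counted in that minute)
    (datetime(2019, 5, 6, 13, 40), 12),
    (datetime(2019, 5, 6, 14, 10), 80),
    (datetime(2019, 5, 6, 14, 55), 5),
]

prompts = [  # (timestamp, self-reported happiness on a 1-7 scale)
    (datetime(2019, 5, 6, 15, 0), 6),
]

window = timedelta(hours=1)
labelled = []
for t_prompt, happiness in prompts:
    steps = sum(s for t, s in sensor_readings if t_prompt - window <= t <= t_prompt)
    labelled.append({"steps_last_hour": steps, "happiness": happiness})

print(labelled)  # [{'steps_last_hour': 85, 'happiness': 6}]
```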
The way we see it, objective data can be measured to make predictions and, ultimately, also to influence individuals, e.g. through recommendations that help them adapt or change their behavior, in order to increase their well-being and happiness.
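To show what such predictions could look like in practice, here is a minimal sketch that fits a regressor on hypothetical sensor features, like the ones aligned above, against self-reported happiness. The features, placeholder data and choice of model are assumptions for illustration; they are not results from our work.

```python
# Sketch: predicting self-reported happiness from hypothetical sensor
# features. The data are random placeholders; only the pipeline is real.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.integers(0, 10000, n),   # steps in the last hour
    rng.uniform(0, 8, n),        # hours of sleep last night
    rng.integers(0, 20, n),      # face-to-face contacts today
])
y = rng.integers(1, 8, n)        # self-reported happiness, 1-7

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
print("R^2 on held-out prompts:", model.score(X_test, y_test))
```

With real sensor streams and real self-reports in place of the random placeholders, the held-out score would give a first indication of how much of the variance in reported happiness the sensors actually capture.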
Ideas are great, and our paper was accepted 🎉. Some more traditional scientists would consider themselves done once the paper is published, but we think this work is only the beginning. We want to take our work out of the lab and prove it in industry: not only that it works, but also that you can make money with it.
Figure from Designing Intelligent Disruption
Companies like Hitachi are already active in the area of happiness measurement, so this line of research and business is less far-fetched than you might think. We have studied several use cases, with school children and with professional teams, where we see potential business models. So far, however, we have not found the right funding to run an actual proof of concept. This blog post is an open invitation for industry to participate in this important work.
Once we have raised the money and built the first proof of concept, I will report back to you. For now I will get back to work 😊.
Just to be clear, in this post I argued that:
Adding sensors to our world, and having algorithms try to figure out what is happening based on sensor data, is a very important line of research at the heart of A.I. But this quest is maybe even more relevant to industry. Together we can join forces and mutually benefit from all the new and exciting possibilities.