The path to predictive insurance models leads through the intermediate step of real-time data aggregation!
The topic of Insurtech is attracting growing interest. This is mainly due to the immense size and importance of the insurance market, but it can also be attributed to the promising new opportunities offered by new technologies. As we pointed out in our last column, “The Five Insurtech Battles”, the applications are very diverse, and players in the Insurtech space can be roughly divided into five categories. A unifying trait, however, is that many of these Insurtechs share a common approach: they tackle their problems by leveraging data and Artificial Intelligence (AI), which we will discuss in more detail below.
Compared to other industries, insurance companies have always run very professional and efficient IT organizations, and data has always played a major role. Its analysis, however, has often happened retrospectively, by aggregating historical data and analyzing it descriptively, e.g. data from past claims.
The spread of sensor technology, for instance, provides the opportunity to get to know the customer better under claim-free conditions and thus to act more proactively, which will have enormous implications for future insurance products. Especially in this context, we believe that AI will bring about the biggest changes in the industry in the coming years.
But now let’s try to work through the situation in a structured way.
In the field of AI, we try to develop automated systems that emulate human intelligence or, in other words, perform tasks that in our understanding require some form of intelligence. These include, e.g., tasks of perception, natural language processing (NLP), pattern recognition, and inference, but also knowledge representation and robotics. AI is a technology that has already been horizontally incorporated into many areas of our everyday lives, such as virtual personal assistants, (semi-)autonomous cars, spam filters, and recommendation services, but also into many very traditional industries, e.g. the steel industry.
Since the solutions to most problems tackled by AI systems are too complex to be defined “manually”, we use sophisticated techniques to learn them automatically from data. While a single sheet of paper is enough to write down the rules of chess in order to calculate the possible outcomes of a move, this is no longer possible with more complex games. The rules do not necessarily have to be more complex, but the possible game situations can no longer be enumerated due to their sheer number. This is where machine learning (ML) techniques are used to automatically extract rules and patterns from the underlying data. A well-known example is AlphaGo, developed by Google DeepMind, which learned to play the game of Go from a large number of training instances (here: different game situations) so well that it ultimately defeated the world’s best Go player.
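To make the idea concrete, here is a minimal, purely illustrative Python sketch: instead of writing rules down by hand, a model extracts them from labeled examples. The features, data, and labels are invented, and this is of course nothing like AlphaGo’s deep learning and tree search; it only demonstrates the principle of learning rules from data.

```python
# Minimal sketch: let a model learn decision rules from examples
# instead of hand-coding them. All data here is invented.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: [age of policyholder, number of prior claims]; label 1 = high risk
X = [[22, 3], [25, 2], [45, 0], [52, 1], [30, 0], [60, 0], [19, 4], [40, 2]]
y = [1, 1, 0, 0, 0, 0, 1, 1]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The printed rules were never written by us -- they were extracted from data.
print(export_text(model, feature_names=["age", "prior_claims"]))
print(model.predict([[23, 2]]))  # risk estimate for a new, unseen case
```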
Machine learning and AI are not new fields of research. Neural networks, which are the basis of the deep learning techniques so prominently featured in the press, are not new either. There have repeatedly been breakthroughs in the application of AI, each time causing a hype, followed by disappointment and a so-called “AI winter”. Why is it different now? Today’s successes are made possible by three fundamental factors that will not disappear any time soon and will only become more important:
Advances in AI Research: Due to its ambitious goals and demanding nature, the field of AI has attracted many researchers since its creation in the 1950s. From various perspectives, and with different interests and motives, researchers in AI and its subfields and adjacent areas have made massive advances in research and applications across various domains in recent decades. A detailed list would go beyond the scope of this article; for those interested, we refer to a brief history and a look into the future in Peter Stone et al., “Artificial Intelligence and Life in 2030”. These research advances can now make an impact in a wide range of applications, mainly due to the circumstances described in the following two points.
Availability of Data: In the past, insurance data was only available internally. A sizeable policy pool constituted a competitive advantage that needed protecting. Today, the advantage can increasingly be gained by combining internal and external data sources. In doing so, the amount of data can be increased, or the data can be enriched with additional information.
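A minimal sketch of what such enrichment can look like in practice, with hypothetical column names and toy values:

```python
# Illustrative sketch (invented columns and values): enriching an
# internal policy portfolio with an external data source via a shared key.
import pandas as pd

internal = pd.DataFrame({
    "postcode": ["50667", "10115", "80331"],
    "policies": [120, 85, 200],
    "avg_claim_eur": [1400.0, 980.0, 1750.0],
})
external = pd.DataFrame({
    "postcode": ["50667", "10115", "80331"],
    "flood_risk": [0.7, 0.2, 0.4],        # e.g. from public geo data
    "traffic_density": [0.9, 0.8, 0.6],   # e.g. from a licensed provider
})

# The combined view carries more signal than either source alone.
enriched = internal.merge(external, on="postcode", how="left")
print(enriched)
```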
The path to predictive insurance models leads through the intermediate step of real-time data aggregation.
The unique position given by statically collected, in-house data will presumably play hardly any role for insurers three to five years from now, or at least no longer provide any real competitive advantage. The data is constantly changing, so it needs to be retrievable and processable in real time. From there, it is only a small step from a reactive business towards predictive models, particularly for applications such as damage prevention.
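As a rough illustration of “aggregate in real time, then predict”, consider a rolling window over a stream of incoming sensor readings; the event source, window size, and threshold below are all hypothetical:

```python
# Sketch of real-time aggregation as a stepping stone to prediction:
# keep a rolling window over a stream and react before damage occurs.
from collections import deque

WINDOW_SIZE = 5
readings = deque(maxlen=WINDOW_SIZE)   # keeps only the latest readings

def on_sensor_event(value: float) -> None:
    """Called for every incoming reading, e.g. from a telematics device."""
    readings.append(value)
    rolling_avg = sum(readings) / len(readings)
    # A predictive model could hook in here; a fixed threshold stands in for it.
    if rolling_avg > 0.8:
        print(f"warning: rolling average {rolling_avg:.2f} exceeds threshold")

for v in [0.2, 0.5, 0.9, 0.95, 0.85, 0.9, 0.3]:  # simulated stream
    on_sensor_event(v)
```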
The opportunities offered by this huge amount of data and its intelligent analysis have been recognized by Insurtechs along the entire value chain. In the following, we illustrate this with some Insurtechs from our Venture Funnel.
The Insurtech company Getmeins tackles the problem of intelligent fraud detection. To this end, user profiles consisting of activities, habits, and lifestyles are first created and linked to other available open and insurer-specific data sources. Intelligent analyses using, for example, photogrammetry, adaptive algorithms, and graph-based methods then make it possible to detect risks and predict fraud.
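We do not know Getmeins’ actual implementation, but in its simplest form a graph-based method of the kind mentioned could look like the following sketch: claims that share contact details are linked, and unusually large clusters are flagged for review. All data is invented.

```python
# Toy graph-based fraud screening: link claims that share attributes
# and flag suspiciously large connected clusters. Data is invented.
import networkx as nx

claims = [
    ("claim_1", {"phone": "111", "address": "A"}),
    ("claim_2", {"phone": "111", "address": "B"}),
    ("claim_3", {"phone": "222", "address": "B"}),
    ("claim_4", {"phone": "333", "address": "C"}),
]

G = nx.Graph()
for cid, attrs in claims:
    G.add_node(cid)
    for other_cid, other_attrs in claims:
        if cid != other_cid and set(attrs.values()) & set(other_attrs.values()):
            G.add_edge(cid, other_cid)  # shared phone number or address

# Connected components above a size threshold become review candidates.
for cluster in nx.connected_components(G):
    if len(cluster) >= 3:
        print("review cluster:", sorted(cluster))
```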
The company PredictiveBid engages at the very beginning of the insurance value chain, namely in customer acquisition. First and foremost, an insurer needs to know who its customers are and what their value to the company is; only then can it decide how to approach them and take action. In the market for online real-time bidding platforms, which is particularly expensive for insurers and their key search queries, bidding strategies can be dynamically adjusted using AI techniques. In doing so, an insurer can concentrate on winning its most valuable customers as efficiently as possible.
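As a hedged illustration of such a dynamic bidding strategy (not PredictiveBid’s actual logic), a base bid could be scaled by the predicted value of the prospect behind a search query; all numbers and names are hypothetical:

```python
# Toy bid adjustment: spend more on prospects predicted to be more
# valuable than average, with a cap to keep spending bounded.
BASE_BID_EUR = 2.00

def adjusted_bid(predicted_customer_value: float,
                 avg_customer_value: float) -> float:
    """Scale the base bid by the prospect's relative predicted value."""
    factor = predicted_customer_value / avg_customer_value
    return round(BASE_BID_EUR * min(factor, 3.0), 2)  # cap at 3x the base bid

print(adjusted_bid(predicted_customer_value=450.0, avg_customer_value=300.0))
# -> 3.0, i.e. bid more for an above-average prospect
```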
The Insurtech Cytora addresses the issue of risk assessment, selection, and pricing. This involves linking external data sources, such as corporate websites, social media, news, and public or proprietary datasets, to insurers’ internal data sources. To this end, the data must be continuously collected, structured using machine learning algorithms, and cleaned.
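A toy sketch of such “structuring” with machine learning, with invented training snippets and labels: short company descriptions, e.g. crawled from corporate websites, are mapped to industry categories that a pricing model could consume.

```python
# Minimal text-classification sketch: turn unstructured company
# descriptions into a structured industry feature. Data is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "We manufacture steel components for the automotive sector",
    "Family-run bakery with three shops in town",
    "Software consultancy focused on cloud migrations",
    "Producer of industrial machine parts and tooling",
    "Cafe and pastry shop near the main station",
    "IT services and custom web development",
]
labels = ["manufacturing", "food", "tech",
          "manufacturing", "food", "tech"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# A crawled website snippet becomes a structured risk feature.
print(clf.predict(["We run a small bakery specializing in rye bread"]))
```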
These examples show the enormous potential that lies in combining AI techniques with internal and external data sources, and many more applications along the entire value chain will follow.
This article was originally written in German for Versicherungsmonitor.
Written by Dr. Babak Ahmadi, Founder of Insurers.AI and AI specialist at Widgetlabs GmbH, and Mehrdad Piroozram, serial entrepreneur, angel investor, founder and general manager of Insurtech.vc.