AI: The weapon of the Insurtechs

Written by mehrdadpiroozram | Published 2018/01/21
Tech Story Tags: artificial-intelligence | insurtech | insurance | venture-capital | weapon-of-the-insurtechs


The path to predictive insurance models leads through the intermediate step of real-time data aggregation!

The topic of Insurtech is attracting growing interest. This is mainly due to the immense size and importance of the insurance market, but it can also be attributed to the promising opportunities offered by new technologies. As we pointed out in our last column, “The Five Insurtech Battles”, the applications are very diverse, and players in the Insurtech space can be roughly divided into five categories. A unifying trait, however, is that many of these Insurtechs share a common approach: tackling their problems by leveraging data and Artificial Intelligence (AI), which we will discuss in more detail below.

Compared to other industries, insurance companies have always run very professional and efficient IT organizations, and data has always played a major role. Its analysis, however, has often been retrospective: aggregating historical data, e.g. from past claims incidents, and analyzing it descriptively.

The spread of sensor technology, for instance, provides the opportunity to know the customer better before any damage occurs and thus to act more proactively, which will have enormous implications for future insurance products. Especially in this context, we believe that AI will bring about the biggest changes in the industry in the coming years.

But now let’s try to work through the situation in a structured way.

What actually is AI?

In the field of AI, we try to develop automated systems that emulate human intelligence or, in other words, perform tasks that in our understanding require some form of intelligence. These include, e.g., perception, natural language processing (NLP), pattern recognition and inference, but also knowledge representation and robotics.

AI is a technology that has already been incorporated horizontally into many areas of our everyday lives, such as virtual personal assistants, (semi-)autonomous cars, spam filters and recommendation services, but also into many very traditional industries, e.g. the steel industry.

Since the solutions to most problems tackled by AI systems are too complex to be defined “manually”, we use techniques that learn them automatically from data. While a single sheet of paper is enough to write down the rules of chess in order to calculate the possible outcomes of a move, this is no longer possible with more complex games. The rules themselves do not have to be more complex, but the possible game situations become incalculable due to their sheer number. This is where machine learning (ML) techniques come in: they automatically extract rules and patterns from the underlying data. A well-known example is AlphaGo, developed by Google DeepMind, which learned the game of Go from a large number of training instances (here: different game situations) so well that it ultimately defeated the world’s best Go player.
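To make this concrete, here is a minimal sketch of what “learning rules from data” can look like in practice, using a small decision tree (scikit-learn assumed installed); the claims data and feature names are invented purely for illustration:

```python
# Minimal sketch: instead of hand-writing rules, let a model learn them
# from labeled examples. All data and feature names are made up.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: [driver_age, vehicle_age, annual_mileage_km]
X = [
    [22, 1, 30000],
    [45, 5, 12000],
    [30, 2, 25000],
    [60, 10, 8000],
    [19, 0, 40000],
    [50, 7, 10000],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = claim filed, 0 = no claim

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# The learned "rules" can be inspected, much like handwritten logic:
print(export_text(model, feature_names=["driver_age", "vehicle_age", "annual_mileage_km"]))
print(model.predict([[28, 3, 28000]]))  # prediction for a new profile
```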

Why now?

Machine learning and AI are not new fields of research. Neural networks, the basis of the deep-learning techniques so prominently represented in the press, are not new either. There have been repeated breakthroughs in the application of AI, each time causing hype, followed by disappointment and a so-called “AI winter”. Why is it different now? Today’s successes are made possible by three fundamental factors that will not disappear quickly and will only become more important:

  1. Advances in AI Research: Due to its ambitious goals and demanding nature, the field of AI has attracted many researchers since its creation in the 1950s. From various perspectives, with different interests and motives, researchers in AI and its subfields and adjacent areas have made massive advances in research and applications across various domains in recent decades. A detailed list would go beyond the scope of this column; those interested will find a brief history and a look into the future in Peter Stone et al., “Artificial Intelligence and Life in 2030”. These research advances can now make an impact in various applications, mainly due to the circumstances described in the following two points.

  2. Massive computing capacity in the cloud, available to us at any time: While algorithms initially had to be trained on individual machines, we have developed ways to process machine instructions in parallel, from connected computers with several parallel processors (CPUs) up to powerful graphics cards with hundreds or thousands of processors operating in parallel (GPUs). With high-performance systems available in the cloud on demand and scalable as needed, there is virtually no barrier to entry for computation-intensive applications.
  3. Data, data and more data: Many AI applications have only become feasible thanks to the large amounts of data available to us today, be it unstructured data, such as text documents, images and videos, or structured data that is predefined and machine-readable. The amount of data created over the course of the entire year 2000 is now created within a single day. This data may stem from services that provide it freely or from proprietary sources (e.g. weather data, crime statistics), from various platforms and social media (YouTube, Facebook, LinkedIn), or from our digital footprint on the web. An ever-growing factor of high importance to the insurance industry is data produced by sensors and the Internet of Things (IoT).

What does that mean for insurers and Insurtechs?

In the past, insurance data was only available internally, and a sizeable policy pool constituted a competitive advantage that needed protecting. Today, the advantage is increasingly gained by combining internal and external data sources, which increases the amount of data or enriches it with additional information.
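As a toy illustration of such enrichment (pandas assumed available, all data invented), an internal policy portfolio might be joined with an open regional statistic:

```python
# Hypothetical sketch: enriching an internal policy portfolio with an
# external, openly available data source (here: made-up regional
# crime statistics keyed by postal code).
import pandas as pd

policies = pd.DataFrame({
    "policy_id": [101, 102, 103],
    "postal_code": ["50667", "10115", "80331"],
    "annual_premium": [420.0, 510.0, 380.0],
})

crime_stats = pd.DataFrame({
    "postal_code": ["50667", "10115", "80331"],
    "burglaries_per_1000": [6.2, 8.9, 3.1],
})

# The join enriches each internal record with external context
# that can later feed a risk model.
enriched = policies.merge(crime_stats, on="postal_code", how="left")
print(enriched)
```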

The path to predictive insurance models leads through the intermediate step of real-time data aggregation.

The unique position given by statically collected, in-house data will presumably play hardly any role for insurers in three to five years, or at least no longer provide a real competitive advantage. The data changes constantly, so it needs to be retrievable and processable in real time. From there, it is only a small step from a reactive business towards predictive models, particularly for applications such as damage prevention.
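A minimal sketch of what this intermediate step might look like, with an invented sensor feed and threshold; a production system would of course use a proper stream-processing stack:

```python
# Minimal sketch of real-time aggregation as a stepping stone to
# prediction: keep a sliding window over incoming sensor readings and
# raise a preventive alert before damage occurs. Thresholds and the
# sensor feed are invented for illustration.
from collections import deque

WINDOW = 5           # number of recent readings to aggregate
ALERT_LEVEL = 80.0   # e.g. pressure units from a water-leakage sensor

window = deque(maxlen=WINDOW)

def on_reading(value: float) -> None:
    window.append(value)
    avg = sum(window) / len(window)
    if len(window) == WINDOW and avg > ALERT_LEVEL:
        print(f"preventive alert: rolling average {avg:.1f} exceeds {ALERT_LEVEL}")

for reading in [70, 75, 78, 85, 90, 95]:  # stand-in for a live stream
    on_reading(reading)
```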

Specific market examples

The opportunities offered by this huge amount of data and its intelligent analysis have been recognized by Insurtechs along the entire value chain. In the following, we illustrate this with some Insurtechs from our Venture Funnel.

The Insurtech company Getmeins tackles the problem of intelligent fraud detection. To this end, user profiles consisting of activities, habits and lifestyles are first created and linked to other available open and insurer-specific data sources. Intelligent analyses using, for example, photogrammetry, adaptive algorithms and graph-based methods then make it possible to detect risks and predict fraud.
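As a generic illustration of graph-based methods in this context, and emphatically not Getmeins’ actual implementation, one classic signal is a cluster of claims sharing the same participants (sketch assumes the networkx library; all data invented):

```python
# Illustrative only: a dense cluster of claims sharing the same people
# (drivers, witnesses, repair shops) is a classic fraud signal. This is
# a generic sketch, not any company's real method; all data is made up.
import networkx as nx

G = nx.Graph()
# Edges link claims to the people involved in them.
G.add_edges_from([
    ("claim_1", "person_A"), ("claim_1", "person_B"),
    ("claim_2", "person_B"), ("claim_2", "person_C"),
    ("claim_3", "person_A"), ("claim_3", "person_C"),
    ("claim_4", "person_D"),
])

# Connected components with many claims and few distinct people are
# worth a closer look.
for component in nx.connected_components(G):
    claims = {n for n in component if n.startswith("claim")}
    people = component - claims
    if len(claims) >= 3 and len(people) <= len(claims):
        print(f"suspicious cluster: {sorted(claims)} share {sorted(people)}")
```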

The company PredictiveBid engages at the very beginning of insurers’ value chain: customer acquisition. First and foremost, it is important to know who our customers are and what value they bring to the company; only then can we decide how to approach them and take action. In the market for online real-time bidding platforms, which is particularly expensive for insurers and their key search queries, bidding strategies can be adjusted dynamically using AI techniques. In doing so, insurers can concentrate on winning their most valuable customers as efficiently as possible.
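A hypothetical sketch of such value-based bid adjustment follows; the keywords, value predictions and caps are invented, and PredictiveBid’s actual models will differ:

```python
# Hypothetical sketch of value-based bidding: scale the bid for a search
# keyword by the predicted lifetime value (LTV) of customers arriving
# through it. All numbers are invented; a real system would update
# these predictions continuously.
BASE_BID_EUR = 2.00
MAX_BID_EUR = 6.00

predicted_ltv = {  # predicted customer lifetime value per keyword
    "car insurance compare": 180.0,
    "cheap liability insurance": 60.0,
    "commercial fleet insurance": 420.0,
}
AVG_LTV = sum(predicted_ltv.values()) / len(predicted_ltv)

def bid_for(keyword: str) -> float:
    # Bid proportionally to expected value, capped to control spend.
    ltv = predicted_ltv.get(keyword, AVG_LTV)
    return round(min(BASE_BID_EUR * ltv / AVG_LTV, MAX_BID_EUR), 2)

for kw in predicted_ltv:
    print(kw, "->", bid_for(kw), "EUR")
```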

The Insurtech Cytora addresses risk assessment, selection and pricing. This involves linking external data sources, such as corporate websites, social media, news, and public or proprietary databases, to insurers’ internal data. The data must therefore be continuously collected, structured and cleaned using machine learning algorithms.
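As a much-simplified, generic sketch of this structuring and cleaning step, and not Cytora’s actual pipeline, unstructured website text can be reduced to structured features like this (the risk keywords and example snippet are invented):

```python
# Generic sketch: turn unstructured external text into structured
# features for commercial risk assessment. Keywords and text invented.
import re

RISK_KEYWORDS = {"welding", "chemicals", "warehouse", "night shift"}

def structure(snippet: str) -> dict:
    text = re.sub(r"\s+", " ", snippet).strip().lower()  # basic clean-up
    return {
        "n_words": len(text.split()),
        "risk_flags": sorted(k for k in RISK_KEYWORDS if k in text),
    }

website_text = """Acme Metalworks operates a   warehouse and offers
on-site welding services, including night shift production."""
print(structure(website_text))
```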

These examples show the enormous potential that lies in combining AI techniques with internal and external data sources, and many more applications along the entire value chain will follow.

This article was originally written in German for Versicherungsmonitor.

Written by Dr. Babak Ahmadi, Founder of Insurers.AI and AI specialist at Widgetlabs GmbH, and Mehrdad Piroozram, serial entrepreneur, angel investor, founder and general manager of Insurtech.vc.

