How Insurers Can Optimize Legacy Systems to Handle Big Data's BI Challenges
A number of technologies have reached a level of sophistication sufficient to merit serious attention from the owners of insurance business intelligence (BI) systems. Intelligent devices such as smartphones and tablets, in-vehicle geolocation devices, social networking sites and even microblogging feeds such as Twitter are a few of the new sources offering a potentially rich and steady supply of important data streams. As a result of this data explosion, data sciences such as predictive modeling and predictive analytics are set to take on even greater importance. With more data, it becomes possible to build more sophisticated back-end and front-office analytical applications, but that is only half the story.
Philippe Torres, InEdge
The new sources of data, along with the coming-of-age of predictive analytics, must themselves be understood within the wider context of ever-increasing storage capacities and plunging storage costs. Inevitably, more data will be retained for even longer, one aspect of the phenomenon now referred to as "big data": datasets too large for traditional data management, characterized by increased volume, velocity and variety, the so-called three Vs of big data.
Can insurance BI systems be adapted to take advantage of this explosion in data while accommodating new, predictive models? Carriers have enough trouble getting answers out of data warehouses filled with known, well-behaved data. The notion of integrating multiple new, even "live" data streams may be intimidating. Nevertheless, with the right approach, carriers can take advantage of this trend and ride the technology curve.
Once a carrier has mastered data warehousing, data structures and the ability to comb through data, the next phase is building models for data mining and predictive modeling. Getting there requires a specialized analytical framework called a "gap analysis," one that many carriers may not be able to develop entirely on their own.
The gap analysis can evaluate, fine-tune and troubleshoot mature insurance BI systems, covering the bases before an upgrade or a move to the next phase of a long-term BI and analytics roadmap. For BI systems experiencing difficulties, a gap analysis can be prompted by low user adoption, the inability to extract information, or the presence of unreliable or conflicting information. For projects under development, cost overruns or late deliverables can prompt one.
Many carriers update their BI and analytics roadmap once every three or four years. Unfortunately, by the time a BI program attains maturity, the scene has often changed enough to warrant another look. A gap analysis can reveal the real user constituencies and help implement changes in the right order, on the correct architecture.
Looking to the future, "big data" is the next frontier, driven by its unavoidable three Vs (volume, velocity, variety). With almost every device producing geolocation data, for example, usage-based insurance products are within the reach of small to mid-size carriers. Forward-looking carriers are already offering coverages that integrate geolocation data, with premiums varying as a function of how a vehicle is actually used.
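As a concrete illustration, consider how a telematics feed might translate into a usage-based premium. The following is a minimal Python sketch; the rate factors and driving-behavior signals are illustrative assumptions, not actual actuarial values or any particular carrier's rating algorithm.

# Minimal usage-based insurance (UBI) premium sketch.
# All rate factors below are illustrative assumptions, not real actuarial values.

def monthly_ubi_premium(miles_driven: float,
                        hard_brake_events: int,
                        night_miles: float) -> float:
    BASE_RATE = 30.00          # fixed monthly base premium (assumed)
    PER_MILE = 0.05            # cost per mile driven (assumed)
    BRAKE_SURCHARGE = 0.75     # surcharge per hard-braking event (assumed)
    NIGHT_FACTOR = 0.02        # extra per mile driven at night (assumed)

    premium = (BASE_RATE
               + miles_driven * PER_MILE
               + hard_brake_events * BRAKE_SURCHARGE
               + night_miles * NIGHT_FACTOR)
    return round(premium, 2)

# A low-mileage, cautious driver pays close to the base rate:
print(monthly_ubi_premium(miles_driven=300, hard_brake_events=2, night_miles=40))

The design point is that the premium is recomputed from observed behavior each period, rather than fixed once at underwriting time.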
Regarding the integration of BI into mobile applications, we are at the stage where house hunters, real-estate agents and insurance agents are pulling up risk maps of neighborhoods on mobile devices. It is small scale for now, but it can only grow. Intelligent devices such as smartphones and tablets also merit attention as data sources in their own right.
These factors open up the potential for developing new insurance products. No one knows with any certainty what these will be. The social and economic ground is shifting. Despite the uncertainty, one take-home message is clear: not every piece of data is relevant.
Predictive modeling and predictive analytics leverage data mining to semi-automatically correlate datasets. Instead of a purely analytical approach, the predictive model says “X correlates to Y in so many cases.” The ability to accumulate massive amounts of data adds further challenges to the development of predictive models. In farm insurance predictive modeling, for example, it will soon be necessary to make use of information relating to local weather trends and events rather than relying on macroscopic models.
[For an example of granular, data-driven underwriting for farm insurance, read about the parametric underwriting approach of Climate Corp, founded by a co-founder of MetroMile, in New Auto Insurer MetroMile Competes on Mileage Data.]
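To make the "X correlates to Y" idea concrete, the following minimal Python sketch estimates how strongly a local weather variable tracks claim frequency. The rainfall and claim figures are synthetic, invented purely for illustration; a real model would use far richer data and proper statistical validation.

import numpy as np

# Synthetic illustration: monthly rainfall (inches) vs. crop-damage
# claim counts for one region. Both series are invented for this sketch.
rainfall = np.array([1.2, 0.8, 3.5, 4.1, 2.0, 5.3, 0.5, 3.9, 4.8, 1.1])
claims   = np.array([3,   2,   9,  11,   5,  14,   1,  10,  13,   2])

# Pearson correlation: the "X correlates to Y in so many cases" signal
# that a predictive model formalizes and extends.
r = np.corrcoef(rainfall, claims)[0, 1]
print(f"rainfall vs. claims correlation: {r:.2f}")

# A least-squares trend line gives a first, crude predictive model:
slope, intercept = np.polyfit(rainfall, claims, 1)
print(f"expected claims at 3.0 in. of rain: {slope * 3.0 + intercept:.1f}")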
Market consolidation has created a smorgasbord of systems with incompatible data. Carriers are in the position of being data rich but information poor. We want them to be information-savvy. Ideally, you spend 90% of your time analyzing the data, not assembling it.
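To illustrate the "assembling" burden, here is a minimal Python sketch of what consolidating policy records from two incompatible legacy systems involves. The field names and layouts are hypothetical; the point is that each source must be mapped onto one conformed structure before any analysis can begin.

# Hypothetical records from two legacy systems acquired through consolidation.
system_a = [{"POL_NO": "A-1001", "PREM_AMT": "1250.00", "LOB": "AUTO"}]
system_b = [{"policy_id": "B-2002", "premium_cents": 98500, "line": "auto"}]

def from_system_a(rec):
    # System A stores premiums as strings and codes in upper case.
    return {"policy": rec["POL_NO"],
            "premium": float(rec["PREM_AMT"]),
            "line_of_business": rec["LOB"].lower()}

def from_system_b(rec):
    # System B stores premiums in cents under different field names.
    return {"policy": rec["policy_id"],
            "premium": rec["premium_cents"] / 100.0,
            "line_of_business": rec["line"].lower()}

# One conformed dataset, ready for analysis instead of reconciliation.
unified = [from_system_a(r) for r in system_a] + \
          [from_system_b(r) for r in system_b]
print(unified)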
If your BI program has attained maturity, and you have noted that the ground has shifted socially, economically and technologically, then what are you to do? That is the point of the gap analysis: it answers the basic question, "Where do we go from here?"
About the Author: Philippe Torres is a founding partner of InEdge, a consultancy specialized in business intelligence and analytics for the insurance industry. Over a career spanning more than 25 years before co-founding InEdge, he worked at Sybase and then Sun Microsystems as a Solutions Architect. He has unique expertise in analytical solutions, the personal lines and general insurance industries, and R&D in the field of data warehousing and business intelligence.