When a small private plane crashed into the Belaire Apartments on New York's Upper East Side in October, it took several hours to verify that the occupants of the craft had been New York Yankees pitcher Cory Lidle and his flight instructor, Tyler Stanger, who tragically died in the accident. It took only minutes, however, for New York-based P&C insurer Magna Carta ($200 million in gross written premium) to verify whether it insured the location itself, and whether collateral damage might have affected other covered risks.
The juxtaposition of these events may seem harsh, but they are worth noting for two reasons. One is that insurers' ability to respond quickly to catastrophes -- on whatever scale -- is central to their mission. The second is that Magna Carta's capability represents an important leap forward in insurers' grasp of location information and their ability to process it rapidly.
Location always has been fundamental to underwriting insurance policies, and, historically, insurers have been among the more sophisticated users of location information. But rapid advances in geographic information system (GIS) technology, combined with the drag of legacy procedures and systems, have left insurers laggards in what might be called location intelligence.
As in so many functional areas of the business, manual processes are a large part of the problem. Underwriters have long worked with an array of geographic factors to determine risk, such as ZIP code, fire protection district and hydrology. But in the world of manual processes, the number of factors that can be brought to bear is relatively limited. "You can only get so many factors on a page at one time to create a price or get an accept/reject underwriting decision," says Donald Light, a San Francisco-based analyst with Celent. "GIS can bring a lot of other factors in on the fly at the underwriting stage, or at the price development phase when the actuaries are trying to figure out the rate method."
Major disasters, both natural and man-made, have pushed insurers toward more-sophisticated mapping technologies for managing the concentration of risk, though with limited success. Since Hurricane Andrew in 1992, insurers have worked with GIS vendor partners to better understand the regional distribution of risk, and the man-made catastrophe of Sept. 11, 2001 drove more-precise use of such technology. An insurer may decide that being within as little as 100 yards of a potential terrorist target, such as the New York Stock Exchange, is a critical underwriting factor. "GIS lets them do that," Light remarks. "Manually applying such rules is cumbersome, and the difficulty increases as you multiply examples."
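The kind of proximity rule Light describes can be sketched in a few lines. The following is a minimal illustration, not any carrier's actual implementation: it uses the haversine great-circle formula to flag a risk that falls within a given radius of a landmark. The function names, the 100-yard threshold and the approximate NYSE coordinates are all assumptions chosen for the example.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points (haversine)."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

YARD_M = 0.9144  # meters per yard

# Illustrative high-risk landmark: approximate NYSE coordinates
NYSE = (40.7069, -74.0113)

def within_target_radius(risk_lat, risk_lon, target=NYSE, radius_yards=100):
    """True when the risk sits inside the rule's radius around the target."""
    return haversine_m(risk_lat, risk_lon, *target) <= radius_yards * YARD_M
```

In a GIS-backed workflow this check runs against every submission automatically; the "cumbersome" manual alternative is an underwriter measuring distances on a map by hand.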
Carriers that take advantage of rules-based proximity and aggregation analysis have a leg up on more than just monitoring risk concentration and underwriting individual risks, according to Craig Bedell, director, insurance vertical, MapInfo (Troy, N.Y.). "Subscribers to rules-engine technology can write and execute more-powerful location-based rules, and that goes a long way to further straight-through processing and exception underwriting," he says.
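Bedell's point about rules engines and straight-through processing can be illustrated with a toy routing sketch. This is not MapInfo's product or any real carrier's rule set; the rule names, thresholds and fields are hypothetical. The idea is simply that each location-based rule either passes or fires with a reason, and a submission with no fired rules issues straight through while any exception is referred to an underwriter.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class Submission:
    lat: float
    lon: float
    dist_to_coast_mi: float
    fire_protection_class: int  # 1 (best) through 10 (worst)

# A rule returns a reason string when it fires, or None when it passes.
Rule = Callable[[Submission], Optional[str]]

def coastal_rule(s: Submission) -> Optional[str]:
    # Hypothetical appetite: refer anything within a mile of the coast
    return "within 1 mi of coast" if s.dist_to_coast_mi < 1.0 else None

def fire_class_rule(s: Submission) -> Optional[str]:
    # Hypothetical appetite: refer poorly protected locations
    return "fire protection class worse than 8" if s.fire_protection_class > 8 else None

def route(s: Submission, rules: List[Rule]) -> Tuple[str, List[str]]:
    """Issue straight through when no rule fires; otherwise refer with reasons."""
    reasons = [r for r in (rule(s) for rule in rules) if r]
    return ("refer-to-underwriter" if reasons else "straight-through", reasons)
```

A rules engine generalizes this pattern: underwriters author the rules declaratively, and the engine evaluates them against geocoded submission data without anyone plotting locations by hand.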
Anthony O'Donnell has covered technology in the insurance industry since 2000, when he joined the editorial staff of Insurance & Technology. As an editor and reporter for I&T and the InformationWeek Financial Services of TechWeb he has written on all areas of information ...