Insurance & Technology is part of the Informa Tech Division of Informa PLC




Improving the Quality of Insurance IT Applications, Part II: The Current State of Insurance IT Applications

In Part II of this three-part series Paul Camille Bentz, former CIO, AGF-Allianz, describes the current state of the insurance IT portfolio and why this makes it hard to achieve the requisite business goals of increasing revenue and margin through M&A and organic growth.

By Paul Camille Bentz, former CIO, AGF-Allianz

The business of insurance (property and casualty, health and life insurance) has always been IT intensive. IT plays an integral role in insurance products; in many cases it is the product. Hence, the insurance industry was one of the earliest industries to integrate IT into its business model. Starting in the early 1970s, the insurance industry has relied on massive applications programmed in COBOL and running on mainframes. By any measure, these applications are spectacularly successful: they continue to perform well under heavy transaction loads. So much so that most companies tread very carefully when it comes to enhancing these applications. But business conditions continually push for such enhancements, which in turn have made the insurance IT portfolio increasingly complex.

• Competitive differentiation pressure drives the rapid development of new products targeted towards specialized market segments. This rapid development creates a mishmash of architectures, a constellation of interconnections and little useful documentation.

• A patchwork of continually changing, complex regulation leads to multiple variants of the same product, customized to comply with local regulations.

• Different market segments dictate the need for distinct product lines with little overlap, creating silos of product lines and of the IT skills necessary to support them.

These business conditions create IT portfolios that have a distinct signature: separate product lines, each running on its own infrastructure with its own specialized development and support teams. Within each of these product lines, considerable complexity is created by the multiple variants of the same product, the mix of new (.NET, Java) and old (COBOL) technologies, and a tangle of poorly-documented interconnections due to rapid and repeated enhancements.

Making changes to these highly inter-connected applications in response to business needs takes more time and costs more money than anyone can accept. Moreover, such changes escalate business disruption risks due to performance and security lapses.

The lack of expertise compounds the problem. Ideally these changes should be closely governed by the architectural roadmap. However, the reality is that these changes outpace the skills of technical and business SMEs who are organized along business silos, making it difficult to support inter-connected systems.

From the CIO's perspective, the proliferation of application technologies and inter-connections has damaging ripple effects. Not only does it drive up application maintenance costs, it also drives up the cost of the infrastructure on which these applications run. When infrastructure management is outsourced, the thresholds and targets outlined in SLAs become harder to enforce because variance from these targets occurs in large part due to complexities in the application-infrastructure stack, something that is outside the vendor's control. Vendor costs become significantly (and justifiably) higher, cutting into the cost savings of outsourcing.

Ironically, the same business forces that necessitate lower cost, higher revenue and higher profit margin drive higher cost and lower performance in the IT applications that are integral to achieving these business goals.

In the next section we see how software quality is critical to solving this conundrum.

To stop the insurance IT engine from seizing up, we must be able to rapidly create and enhance products and open connections between previously siloed products, all without reducing service levels or increasing costs. The only way to do this is by focusing on application quality. U.S. Supreme Court Justice Potter Stewart famously said of pornography, "I don't know how to define it, but I know it when I see it." The same might be said for application quality. But without a clear definition of application quality we simply cannot achieve our business goals.

The Risks of Rapid Margin Growth: Software Quality Problems

The knowledge to develop, support and enhance these systems takes time to acquire, slowing responsiveness to business needs. There is also a significant increase in the risk of performance, security and stability problems, draining cost and business productivity at a time when companies can least afford these losses.

The traditional approaches to managing these risks (replace, wrap and integrate) are all made harder by this growing complexity. Let's take a moment to understand why these longstanding problems require new solutions.

Replace. Given the mishmash of systems of different ages, technologies and languages cobbled together over the years to run a product line, it's tempting to scrap it all and start again. In most cases this is unrealistic. Neither the business nor IT has the stomach, money or time to do this.

Wrap. A common way to squeeze new functionality out of old systems is to provide a service interface through which the old systems can be safely accessed without any need to touch the "guts" of the old application.
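As a minimal sketch of the wrap approach (purely illustrative; none of these names or record layouts come from the article): a thin service facade exposes a validated, structured interface over a legacy lookup routine whose internals are never touched.

```python
# Illustrative sketch of "wrap": a service facade over an untouched legacy routine.
# All names, the record layout, and the fields are assumptions for illustration.

def legacy_policy_lookup(raw_record_id):
    """Stand-in for a call into the legacy (e.g. COBOL/mainframe) system,
    which returns a fixed-width record string. Its "guts" are never modified."""
    return f"{raw_record_id:>10}ACTIVE    0001500"

class PolicyService:
    """Service interface: validates input, delegates to the legacy routine,
    and translates its fixed-width output into a structured result."""

    def get_policy(self, policy_id: int) -> dict:
        if policy_id <= 0:
            raise ValueError("policy_id must be positive")
        record = legacy_policy_lookup(policy_id)
        return {
            "policy_id": int(record[0:10]),
            "status": record[10:20].strip(),
            "premium_cents": int(record[20:27]),
        }

service = PolicyService()
print(service.get_policy(42))
# {'policy_id': 42, 'status': 'ACTIVE', 'premium_cents': 1500}
```

New consumers program against `PolicyService`, so validation, error handling and data translation live in the wrapper while the legacy code keeps serving its existing load unchanged.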

Integrate. Functionality is added or enhanced through connections of data, interfaces, logic and infrastructure with other systems.

No matter the approach, the main software quality problems in large, multi-platform, multi-language insurance applications are the following:

1. Lack of Documentation. There is very little useful documentation of existing systems. This considerably slows the delivery of new capabilities and makes their performance more unpredictable.

2. A Tangle of Inter-Connections. No single person or team can have an end-to-end view of the interconnections between application components, both those within the application and those extending out to other applications. These interconnections produce complicated interdependencies of data and functionality between application components. These dependencies can be sensitive to the time sequence in which they should occur, adding another layer of complexity to these inter-connections. This makes it very difficult to know if everything is working as it should.

3. The Inadequacy of Testing. Testing alone is insufficient to solve the quality problems caused by increasing complexity.

a. Testing is usually done too late to catch and rectify design bottlenecks that throttle application performance and responsiveness.

b. Even when testing is done early and often, it will not be able to catch problems that span platforms and languages.

c. Nor will testing catch all the problems that arise due to the context in which a component operates, even when the component by itself is thoroughly tested and is of high quality.

d. Contextual problems are particularly pernicious when the conditions in which the application operates (hardware, usage patterns, transaction volumes) change, or the software itself is changed (patches, configuration changes, minor enhancements). No amount of performance testing can reveal these problems; something more than performance testing is needed to evaluate the quality of application software.

4. Lack of a Quality Measure. Because organizations don't have a way to define and measure software quality, it's hard to know how to size the risk of quality problems, how to prioritize fixing them (you can't, and probably shouldn't, fix every single one), and how these fixes are contributing to the overall quality of the application (is it trending in the right direction?).
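One simple way to make this concrete (an illustrative sketch only, not the author's actual metric): weight findings by severity, normalize by code size, and track the index per release so the trend becomes visible. The severity categories and weights below are assumptions.

```python
# Illustrative quality index: weighted findings per 1000 lines of code (KLOC).
# Lower is better. Weights and categories are assumptions for illustration.

SEVERITY_WEIGHTS = {"critical": 10, "high": 5, "medium": 2, "low": 1}

def quality_index(findings, kloc):
    """Severity-weighted finding count, normalized by application size in KLOC."""
    weighted = sum(SEVERITY_WEIGHTS[sev] * count for sev, count in findings.items())
    return weighted / kloc

# Hypothetical findings for two successive releases of the same application.
releases = [
    ("R1", {"critical": 4, "high": 10, "medium": 30, "low": 50}, 400.0),
    ("R2", {"critical": 2, "high": 8, "medium": 25, "low": 40}, 410.0),
]
scores = {name: round(quality_index(f, kloc), 3) for name, f, kloc in releases}
print(scores)  # {'R1': 0.5, 'R2': 0.366}
print("trending in the right direction:", scores["R2"] < scores["R1"])
```

Even a crude index like this answers the three questions above: it sizes the risk, lets teams prioritize fixes by severity weight, and shows whether successive releases are trending in the right direction.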

5. Lack of Expertise across Application Silos. Developing and supporting applications that cross technology and business process boundaries requires knowledge not present in the ways teams are usually organized and trained. The cost advantage of application or infrastructure outsourcing is quickly eroded by the extra hours that outsourcers spend in supporting these complex applications. Whether the organizational boundaries are internal or external, teams are not set up to coordinate effectively to develop and support these applications for three main reasons:

a. Incentives are misaligned. For example, the Infrastructure group is rewarded for stability, while the Applications group is rewarded for cutting-edge functionality and speed of delivery.

b. Metrics are misaligned. For example, the Infrastructure group tracks availability and network latency as measures of performance, while the Applications group measures performance in terms of successful completion of functional and performance tests. The problem is that both sets of metrics can be "green", yet performance from a business-user's standpoint can be severely impaired. The lack of a shared language of performance blocks a true end-to-end view of application performance.

c. Resourcing priorities are misaligned. When one team needs another to work on the application, the other team has different priorities.

Overwhelming as they seem, this long list of software problems can be solved with the right focus on quality. In the final part of this three-part series, I will explain how we solved these problems at AGF-Allianz with a focus on specific quality measures.

About the Author: Paul Camille Bentz joined AGF in April 2000 to head the IT organization following the merger of three insurance companies acquired by Allianz. The merger was followed by a rationalization program to dramatically reduce the costs of IT while delivering new solutions. He then served as Regional CIO for Allianz, advisor to the Chairman of AGF, and member of the Allianz Executive Board. Before AGF, Paul was CIO of Paribas, where he implemented a global organization with more than 2100 staff worldwide to support all the business areas of the Investment Bank. Paul also served in IT leadership roles for Credit Lyonnais and Air Liquide in several European countries over the course of ten years. He retired from Allianz in 2007 and now runs his own consulting company, while spending time with his wife, three children and three grandchildren. He can be reached at [email protected].
