As demands for competitive distinction grow more intense, and requirements for regulatory compliance more urgent, insurance companies need to produce technology faster and at higher quality than in the past.
Insurers' technology organizations have often relied on talent and dedication more than anything else to muddle through and get where they needed to go. But today's business conditions demand something less like a haphazard collection of human and technological capabilities and more like a well-oiled insurance-technology machine.
That ideal is a far cry from the status quo of many insurance technology organizations, which have had to deal with the innate complexity of the business, multiplied by the stacking of system upon system through M&A activity.
"There is so much regulatory control, and so much complexity wrapped up in policy administration, underwriting and claims processing, that IT departments are doing everything they can to shoulder the burden of maintenance, let alone making strides in the way the business applies technology," asserts Michael Jackowski, a partner with Chicago-based Accenture. "A lot of carriers are sitting on a rat's nest of technology. To be able to simplify that would help them immensely."
Companies that have attempted a quantum leap into a new technological reality have often failed because of the sheer magnitude of the problem, Jackowski opines. As a result, business executives have often become even more averse to costly innovation attempts, a tendency reinforced by recent economic pressures. The dilemma, offers Jackowski, is "'How do I get off my legacy and onto a more productive set of technology?' at a time when no one is willing to assume the risk of projects that require many thousands of workdays."
To get past their present troubles, nearly 90 percent of insurance companies are looking to replace their policy systems over the next two years, asserts Andy Labrot, CTO, The Innovation Group (TiG, London). About a third of those are looking to component-based development (CBD) to make the transition. "It's not only how do you improve application development and delivery time, but also how can you use CBD to lower total cost of ownership in your back office," he says. "They don't have the appetite to build big systems from scratch anymore; they're looking to see how they can leverage what they already have that works."
However, earlier approaches to CBD suffered from focusing too intensely on reuse, at the cost of performance and scalability, Accenture's Jackowski claims. "In the object-oriented world we got too finely grained, and reuse became too brittle because there were too many dependencies baked in," he says. When hundreds of objects would reuse a core function, changes in base objects would ripple across systems, Jackowski explains.
Current CBD approaches involve establishing an appropriate level of component granularity and charting business areas into specific domains of reuse. Crafting a strategy ahead of time to plan the reuse of components builds in greater efficiencies. "Starting to separate and get reuse out of enterprise-wide components is the first area where I see carriers starting to achieve economies of scale and greater effectiveness in their development environments," Jackowski observes.
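The granularity trade-off can be sketched in code. The following is a purely illustrative example, not any carrier's actual architecture; the `RatingComponent` interface and `SimpleRater` implementation are hypothetical names. The point is that consumers depend only on a narrow, coarse-grained boundary for a business domain, so the component's internals can change without the ripple effect Jackowski describes in fine-grained object hierarchies.

```python
from abc import ABC, abstractmethod

class RatingComponent(ABC):
    """Hypothetical enterprise-wide rating component: one stable,
    coarse-grained boundary per business domain."""
    @abstractmethod
    def quote(self, risk: dict) -> float: ...

class SimpleRater(RatingComponent):
    """One interchangeable implementation behind the stable interface."""
    def quote(self, risk: dict) -> float:
        base = 500.0
        # Internal rating logic can be rewritten freely; only the
        # quote() signature is a contract with the rest of the enterprise.
        return base * (1.5 if risk.get("high_risk") else 1.0)

def price_policy(rater: RatingComponent, risk: dict) -> float:
    # Callers program against the interface, not the implementation,
    # so swapping SimpleRater for another rater ripples nowhere.
    return rater.quote(risk)
```

Because the dependency runs only toward the abstract interface, hundreds of consumers can reuse the component without baking in the dependencies that made earlier object-oriented reuse brittle.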
Choice of development approach on its own is insufficient to tackle the productivity challenge, according to James Fridenberg, vice president, application development, Farmers Insurance (Los Angeles, $12 billion in assets). "No matter what kind of a shop you are, whether Web, mainframe or both, you've got to move to some kind of consistent, CMM [capability maturity model] Level 2 methodology, coupled with some standardized SDLC [software development life cycle]," he says. "It all comes down to a standardized, repeatable process, where the business buys into it and you allow for planning several months in advance."
Prior to Farmers' adoption of standardized processes, Fridenberg recalls, the conditions that prevailed were like the Wild West: "We followed development, testing and implementation processes, but on a project-by-project, issue-by-issue basis. Aside from the very big projects, rarely did we look at the big picture of changes to our applications and systems."
About five years ago, Farmers moved from this ad hoc approach to a release methodology that ensures monthly delivery of one of three concurrent releases, each on a 90-day software development lifecycle, for a total of 12 releases throughout the year. According to Fridenberg, the business and IT arrive at defined deliverables, which are then prioritized and scheduled through an in-house-built capacity model that tracks the disposition of development resources.
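The cadence arithmetic can be checked with a short sketch. This is an illustrative model, not Farmers' actual scheduling system: three concurrent streams, each on a 90-day cycle, staggered 30 days apart, land one delivery per month for a total of 12 per year.

```python
# Three concurrent release streams, each on a 90-day lifecycle,
# staggered 30 days apart so that some stream delivers every month.
STREAMS = 3
CYCLE_DAYS = 90
STAGGER_DAYS = CYCLE_DAYS // STREAMS   # 30 days between stream kickoffs

def delivery_months(cycles_per_stream: int = 4) -> list[int]:
    """Month offsets (0-based, 30-day months) at which a stream delivers."""
    months = []
    for stream in range(STREAMS):
        for cycle in range(cycles_per_stream):
            day = stream * STAGGER_DAYS + (cycle + 1) * CYCLE_DAYS
            months.append(day // 30)
    return sorted(months)

# Once the pipeline is primed, the 12 deliveries fall in 12 consecutive
# months (offsets 3 through 14) -- one release lands every month.
```

The sketch makes the key property visible: the stagger, not the cycle length, is what turns three quarterly streams into a monthly delivery rhythm.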
"We have an Access [from Microsoft, Redmond, WA] database where we store for the year what every major application's development capacity is, and as we plan the projects, we start filling up the 'capacity buckets,'" Fridenberg explains. "If I have batch work that needs to be done, I can fill up that bucket, if I have document work, I fill up that bucket, and so on."
Planning sessions take place on an almost weekly basis, packaging resources based on available capacity and subject-matter expertise. Through the model, claims Fridenberg, "I can tell you that every member of my team is working 100 percent of the time."
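A capacity model of the kind Fridenberg describes can be sketched as a set of buckets that planning sessions draw down. This is a hypothetical illustration; the bucket names, numbers, and `CapacityModel` class are invented for the example, not taken from Farmers' Access database.

```python
class CapacityModel:
    """Illustrative capacity-bucket model: each bucket holds the remaining
    workdays of development capacity for one kind of work."""

    def __init__(self, buckets: dict[str, int]):
        self.remaining = dict(buckets)

    def fill(self, bucket: str, workdays: int) -> bool:
        """Reserve capacity for a project; refuse if the bucket would
        overflow, signaling the work should slot into a later release."""
        if self.remaining.get(bucket, 0) < workdays:
            return False
        self.remaining[bucket] -= workdays
        return True

# Hypothetical annual capacities for three kinds of work.
model = CapacityModel({"batch": 400, "documents": 250, "web": 300})
model.fill("batch", 120)      # fits: batch bucket has room
model.fill("documents", 300)  # refused: exceeds remaining document capacity
```

Because every reservation either fits a bucket or is pushed to a later release, the model keeps the team fully loaded without overcommitting any specialty, which is the property behind Fridenberg's "100 percent of the time" claim.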
Fridenberg describes the carrier's technology organization as now operating as "a software factory," with a quasi-assembly line perspective on how to deliver code. "I think Mr. Ford had it right when he talked about productivity: If you get all the right quality measurements and checkpoints in place, the more you can streamline and the more efficient everyone is going to be," he argues.
Around the same time Farmers was implementing its release process, Blue Cross Blue Shield of Florida (BCBSF, Jacksonville, $3.5 billion total assets) adopted a factory concept. According to Ricardo Garcia, director of capability development factory (CDF), the carrier has four major factories: customer relationship management, enterprise resource planning, information management-which creates the data warehouse and decision support capabilities for the enterprise-and Garcia's own factory, which is responsible for all provider and customer-facing applications, including those relating to eligibility and claims development and processing.
The factory model resulted in higher productivity, but the carrier decided to take it up a notch by implementing Rational Software's (Cupertino, CA, a division of IBM, Armonk, NY) Rational Unified Process (RUP), Garcia says.
The RUP development methodology involves the execution of discipline-related tasks within a matrix of iterations carried out over a four-phase process-inception, elaboration, construction and transition-according to Eric Naiburg, group manager, industry solutions, Rational. "Iterative development implies having multiple points in time where you're checking in before you actually release your version, and potentially working in parallel on multiple versions of a software application," he explains. "So rather than waiting to the end to see that you missed some customer requirements and having to re-architect, it preaches an approach where you have check-ins at certain points as you're building."
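The iterative check-in idea can be sketched in a few lines. This is a simplified illustration of the principle Naiburg describes, not the RUP itself: each iteration within each phase validates the increment against requirements, so a miss surfaces early in the matrix rather than at release.

```python
# The four RUP phases, each containing some number of iterations.
PHASES = ["inception", "elaboration", "construction", "transition"]

def run_project(iterations_per_phase, build, check_requirements):
    """Run iterations phase by phase; report the first failed check-in.

    build(phase, i) produces an increment; check_requirements(increment)
    is the per-iteration check-in. Returns (phase, iteration) where a
    gap was found, or None if every check-in passed.
    """
    for phase in PHASES:
        for i in range(iterations_per_phase):
            increment = build(phase, i)
            if not check_requirements(increment):
                return (phase, i)   # rework is found here, not at release
    return None
```

Contrast with a waterfall, where `check_requirements` effectively runs once, after the last iteration of `transition`, and a missed requirement forces re-architecting the finished system.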
RUP implementation at BCBSF began early last year with the adoption of UML (Unified Modeling Language) as the standard for diagramming and architecting systems. "From that point we started preparing a curriculum for training the entire IT group, and to date we have trained 600, or about half," Garcia says. As the carrier's first factory to implement RUP, "we also started realigning [the CDF] internally to bring in the needed skill sets, and bring on board software architects and mentors to help promote the methodology."
A pilot was run in the second quarter of 2002, and the CDF engaged with the RUP early this year. "All new products in the factory are now done using the RUP, but projects started on our previous methodology will be completed under that methodology," Garcia says. "We're not switching projects midstream."
The RUP organizes efforts according to an array of disciplines, including business modeling, requirements, analysis and design, implementation, test, deployment, configuration and change management, project management and environment. BCBSF currently is focusing on the business modeling, requirements and testing disciplines on the road to a complete RUP implementation.
BCBSF's previous methodology, which Garcia characterizes as a "waterfall," involved different groups performing work successively and then "throwing it over the wall," he says. The analysis and function design stages of a project lifecycle would yield about 52 artifacts and around 800 pages of documentation. "Under the RUP we produce 12 artifacts and an average of about 120 to 200 pages of documentation," Garcia asserts. "That's a 75 percent improvement right there in the requirements discipline, analysis and design."
Garcia says that the "architecture first" emphasis of the RUP has resulted in a tremendous quality improvement, manifested by a lack of needed rework. "All the company's architects are working together at the beginning rather than waiting till the end, and our testing group is working with us up-front to help the design team," Garcia relates. "Those are among the contributions that are bringing tremendous value to the solutions we're providing to the business."
To improve IT's ability to deliver for the business, Nationwide Financial (Columbus, OH; $24.5 billion in assets) has developed what it calls a technology engagement model (TEM), according to Kelly Cannon, the carrier's CIO for enterprise infrastructure. "Our focus has been on aligning the IT community on the most important business initiatives and projects, so in a sense, we're looking more at the productivity of the community as a whole than of individual developers," Cannon says. "We're trying to move away from the open and somewhat informal, heterogeneous environment we had in the past."
The TEM is built on the intersection of three areas with corresponding bodies and roles. "There's a strategy component, a technical component and then the actual application of technology," Cannon explains. "The strategy side is the 'why?,' the technology side is the 'how?,' and the infrastructure and application side is 'what?'"
A strategy council made up of representative members from business and technology is responsible for integration of strategy across the various business and technology silos in the company, Cannon relates. "Through experience we found we often did a good job of application strategy in a particular part of the business, but that rarely crossed over into other parts, and the skill sets involved in putting that strategy together tended to stay within the silo," he says. "The strategy council is a way to step above that."
A technology council staffed by IT membership with deep domain knowledge is charged with determining how to implement strategy. "We try not to presume what the underlying technology will be, but to describe what it is we want to accomplish in business and high-level technology terms, describing the application in question," Cannon says.
Key to making the model succeed is a way of thinking about people, Cannon insists. "There is always a limited number of highly qualified people who have the ability to make a significant contribution to any effort," he says. "We believe it's very important to think about where you want them."
Within the TEM, Nationwide has attached those people to the process in the roles of solutions architect on the strategy side and domain architect on the technology side. On the infrastructure and applications side, technology engineers have responsibility for implementation. There is a very small number of solutions architects across the enterprise, about six each for the P&C, life and infrastructure areas, and "they are typically members of the strategy council and play key leadership roles in the organization," Cannon says. Domain architects are members of the technology council and professionals who need to be "deep in a very broad technology space," according to Cannon. "They are responsible for directing the design of specific technology solutions, as well as consulting on solutions." Technology engineers put the architects' vision into practice through a deeper understanding of the technology than the architects possess, Cannon says. "They are the people playing key thought leadership roles in the very big development projects, and there are generally more of them than the architects."
In the case of all three role categories, "we've decided on a fixed number that we need, and know exactly what their responsibility is," Cannon says. And through this deployment of critical skills, within the TEM model, he continues, "we attempt to understand the technology space and manage it in a rational manner, driving decisions about strategy and the use of technology with the thought leadership of some of the top technology people we've seen anywhere."
JOURNEY TO CMM LEVEL 3
Seeking to move to CMM (capability maturity model) Level 3 from Level 1, Allmerica Technology Services (ATS), the technology arm of Allmerica Financial (Worcester, MA; $3.3 billion 2002 revenue), leveraged an outsourcing partnership to reach its destination more quickly.
"We started looking at CMM late in 2000 as a means of instilling repeatable processes in our project management and software engineering practices," says Greg Tranter, vice president and CIO. "At the time, we also had a challenge to fill approximately 100 technology positions, which was not happening quickly: we had the budget, but not the people, and needed to deliver more with less."
ATS was assessed at CMM Level 1 in 2001 and set a goal of achieving Level 2 in 2002. To address its capacity needs, it had turned to Boston-based Keane, Inc. in 2000, transitioning non-core work to the firm's near-shore facility in Halifax, NS, and subsequently expanding the relationship in 2002, according to Tranter. "Since Keane was operating at CMM Level 3, we began leveraging their CMM framework and reset our target to achieving CMM Level 3 by year-end 2003," he says. If ATS succeeds, by Tranter's reckoning its journey from Level 1 to Level 3 will have beaten the average transition time by more than half.
Tranter relates that ATS's journey to CMM Level 3 is defined by four key phases:
Mobilization: Establishing teams for the program (e.g. Implementation, Engagement, and Communication), training the teams in methodologies and disciplines, developing program and project plans, and establishing a software quality assurance team (SQA).
Transition: During which detailed assessments and decisions regarding tailoring or modifying Keane's project management and software engineering processes and templates occurred. "During this phase, we also established several support groups: the Software Engineering Process Group (SEPG), the Software Configuration Control Board (SCCB), and the Project Office," Tranter says.
Institutionalization: ATS's current phase. "With the onset of this phase, software engineering teams began following the new project management and software engineering standards," Tranter explains. "The support teams [SQA, SEPG, SCCB, and Project Office] work with project teams and support continuous process improvement." At periodic points throughout this phase, ATS conducts "Quick Looks" to assess whether it is CMM compliant.
Certification: "Upon using the project management and software engineering standards consistently for three months [validated in the 'Quick Looks'] we will begin a formal, independent review process to determine that we are operating at CMM Level 3," Tranter concludes.
Anthony O'Donnell has covered technology in the insurance industry since 2000, when he joined the editorial staff of Insurance & Technology. As an editor and reporter for I&T and the InformationWeek Financial Services of TechWeb he has written on all areas of information ...