Insurance & Technology is part of the Informa Tech Division of Informa PLC




Is Your Global Technology Infrastructure in Order?

Infrastructure considerations such as performance, capacity, scalability, and reliability are critical to an insurer's ability to deliver on its business strategies—especially in an increasingly global industry.

Security: Heightened Awareness

It goes without saying that security is a critical piece of the technology strategy for nearly every type of business. Beyond compliance requirements for security, data protection and privacy, insurers must protect themselves and their customers from fraud and unauthorized access to sensitive data.

Among the most common approaches to security are authentication and encryption. Authentication validates the user's identity by means of a user name and password or a digital certificate. Encryption encodes data in transit using protocols such as Secure Sockets Layer (SSL) and HTTPS (HTTP over SSL).
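The two halves of this picture, server authentication via certificates and encryption of the channel, come bundled together in a TLS connection. As a minimal sketch (using Python's standard `ssl` module; modern stacks use TLS, the successor to SSL):

```python
import ssl

# Build a client-side TLS context. The same handshake that encrypts
# the traffic also authenticates the server via its certificate.
context = ssl.create_default_context()

# Certificate validation and host-name checking are on by default;
# together they provide the authentication half of the picture.
print(context.verify_mode == ssl.CERT_REQUIRED)  # the peer must present a valid certificate
print(context.check_hostname)                    # the certificate must match the host name
```

Wrapping an ordinary socket with `context.wrap_socket(sock, server_hostname=...)` then gives an encrypted, server-authenticated channel.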

While security is clearly necessary, it can also create performance bottlenecks. For this reason, it may make sense to reconfigure the points at which different security processes are executed. For example, a firm may use a Lightweight Directory Access Protocol (LDAP) server or an X.500 backbone to handle initial access and grant authorization to users. Access to individual applications can then be deferred to lower tiers within the infrastructure, while leveraging user information from the initial log-in.
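One common way to let lower tiers leverage the initial log-in is for the gateway tier to issue a signed token after the directory (LDAP/X.500) authentication succeeds; downstream applications then verify the signature instead of re-authenticating against the directory. The sketch below uses a keyed hash (HMAC) for the signature; the shared secret, token format and user ID are illustrative assumptions, not any specific product's scheme:

```python
import hashlib
import hmac

SECRET = b"shared-secret"  # hypothetical key shared by the tiers

def issue_token(user_id: str) -> str:
    """Gateway tier: called once, after the directory log-in succeeds."""
    sig = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}:{sig}"

def verify_token(token: str):
    """Application tier: trusts the gateway's signature instead of
    re-authenticating the user against the directory."""
    user_id, _, sig = token.partition(":")
    expected = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return user_id if hmac.compare_digest(sig, expected) else None

token = issue_token("agent42")
print(verify_token(token))        # agent42
print(verify_token(token + "x"))  # None (tampered token is rejected)
```

The expensive directory round-trip happens once; each application tier pays only the cost of one hash computation per request.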

The key question is: Just how much security does a company really need? To answer it, IT management needs to take a hard look at the firm's applications. For example, if your environment serves large constituencies of users (such as different classes of customers, partners and suppliers), more authentication and authorization are needed.

Conversely, if you can grant most users anonymous access (such as on a general-purpose Web site or an intranet site behind a firewall), you can safely eliminate much of the security at this level, as well as the bottleneck it creates.

Overall, we recommend that organizations establish reasonable security policies for the enterprise as a whole. Security must not be shortchanged, but neither should it be used unnecessarily. There is little need, for example, to encrypt data or encrypt transmissions on machines that sit behind two or three firewalls. Establishing a good security strategy means taking a good look at applications, system configurations and the types of users who are accessing the systems to understand how to configure security most intelligently—such that it protects corporate data without impacting performance.

Technical concerns should not be the only focus. Organizational issues such as those surrounding centralized and decentralized models, performance monitoring, and the availability of IT skill sets also need to be addressed.

For example, the pendulum has swung with every wave of technology: from mainframes to client/server; from data warehouses to distributed data marts. Whether you're talking about centralized staff, data or applications, the same issues and questions apply.

While promising in theory, centralized models usually don't make sense over the long haul. A centralized approach can last for a short period of time, but in most instances the approach doesn't meet the needs of the local market. Information that's appropriate for one region or nation may not work as well in another.

Keeping centralized information updated is also difficult, particularly in a global setting, because the central office may not be aware of restrictions in the local environments.

Conversely, while a decentralized model addresses local needs, it's also more expensive to operate, because people need to maintain multiple solutions, Web sites and resources in various regions or nations, rather than using a single, central pool of technologies and staff.

We recommend a hybrid of these two models. In other words, attempt to implement solutions and processes that enable central management of common information, but also provide distributed capabilities to ensure local needs are met.

This could mean, for example, choosing to deploy a content management solution that supports the ability to manage a consistent global brand while "localizing" Web content.
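The hybrid model boils down to a simple resolution rule: serve the regional version of a content item when one exists, and fall back to the centrally managed global version otherwise. A minimal sketch, with hypothetical content and region codes:

```python
# Hypothetical content store: a centrally managed "global" default
# plus per-region overrides maintained by local teams.
CONTENT = {
    "global": {"tagline": "Insurance for a changing world"},
    "de":     {"tagline": "Versicherung fuer eine Welt im Wandel"},
}

def localized(region: str, key: str) -> str:
    """Return the regional version of a content item, falling back
    to the global version when no local override exists."""
    return CONTENT.get(region, {}).get(key, CONTENT["global"][key])

print(localized("de", "tagline"))  # regional override wins
print(localized("fr", "tagline"))  # falls back to the global brand copy
```

The global team controls the defaults (the consistent brand), while regional teams override only what local needs require.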

What's Scalable?

So how do you know that your infrastructure is scalable? What are the triggers that can be used to determine whether a company is experiencing bottlenecks or service problems?

The key is to monitor performance. An insurer must set up its infrastructure to monitor the performance of each component, all system resources and hardware devices, using monitoring tools that identify performance bottlenecks, capacity issues and trouble spots.
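At its simplest, this kind of monitoring is threshold-based: sample each metric and flag any that breach a configured limit. The metric names and limits below are illustrative assumptions, not those of any particular monitoring product:

```python
# Illustrative thresholds: flag CPU above 85%, disk above 90% full,
# and response times above 500 ms.
THRESHOLDS = {"cpu_pct": 85.0, "disk_pct": 90.0, "response_ms": 500.0}

def check(samples: dict) -> list:
    """Return the metrics whose latest sample exceeds its threshold."""
    return [name for name, value in samples.items()
            if value > THRESHOLDS.get(name, float("inf"))]

alerts = check({"cpu_pct": 92.3, "disk_pct": 40.1, "response_ms": 610.0})
print(alerts)  # ['cpu_pct', 'response_ms']
```

Real tools add trending over time, so capacity issues surface before a threshold is ever breached.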

Operating systems provide their own monitoring tools, and many applications now integrate directly with these monitors to expose application-specific parameters and performance data. In addition, many high-end systems include integration with enterprise management solutions such as Computer Associates' (Islandia, NY) CA-Unicenter or Hewlett-Packard's (Palo Alto, CA) HP OpenView.

A key to building a solid technology infrastructure is troubleshooting. Identification of the problem is only the first step. It's more important to provide the support organization with appropriate prioritization and escalation procedures to ensure the critical issues are addressed first.
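A prioritized incident queue is one way to make "critical issues first" concrete. The sketch below uses a binary heap keyed on severity (lower number = more critical), with a counter to break ties in arrival order; the severity scale and sample issues are illustrative:

```python
import heapq
import itertools

_counter = itertools.count()  # tie-breaker: first reported, first served
queue: list = []

def report(severity: int, issue: str) -> None:
    """Log an issue; severity 1 is most critical."""
    heapq.heappush(queue, (severity, next(_counter), issue))

def next_issue() -> str:
    """Hand the support team the most critical open issue."""
    return heapq.heappop(queue)[2]

report(3, "report layout glitch")
report(1, "policy database unreachable")
report(2, "slow quote response times")
print(next_issue())  # policy database unreachable
```

Escalation then becomes a matter of re-reporting an aging issue at a lower severity number so it rises to the front of the queue.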

Additionally, training and development of skilled technology support personnel is critical in all organizations. As an insurance company looks to build out its infrastructure, create distributed systems and implement new performance tools, management must make sure staff members possess the skills to support the enhanced environment. Keep in mind that an enhanced infrastructure means increasing the baseline requirement for new IT employees, potentially increasing the challenge of finding the right people. If you cannot train or hire the appropriate resources, you must consider outsourcing those necessary functions.

If time, money, and resources were unlimited, all organizations would have scalable infrastructures, capable of handling high throughput with great performance, while also providing untethered access, with impenetrable security.

But in the real world, constraints of time, budget and resources always play a role in technology decisions and IT management strategies. Plus, security and regulatory concerns must also be addressed, often at the expense of scalability or performance.

It's small comfort that all insurers face the same challenges as their markets become increasingly dispersed and as global competitive barriers vanish. Infrastructure considerations such as performance, capacity, scalability and reliability are increasingly important to a company's ability to deliver its business strategies. But the truth is, there are plenty of smart ways to architect and manage infrastructure. The key is to implement the technologies that provide the most "bang for your buck," while also putting in place the appropriate tools and monitoring processes to evaluate an environment's performance.

Todd Hollowell is vice president of consulting services and Linda Andrews is a senior technical editor with Doculabs, a Chicago-based independent research and consulting firm that helps organizations choose the right technologies and strategies for e-business.


Integration Solutions

-- Transaction Monitors

-- Message Queue Servers

-- Brokers

-- Enterprise Application Integration (EAI)

-- Business-to-Business Integration (B2Bi) Components

Source: Doculabs
