Simply Building a Testing Automation Framework Won't Save Your ROI
Automated software testing presents serious challenges to insurance companies. The more efficient the testing, the more testing gets done and the fewer errors reach production. The less efficient the testing, the more time it takes, until the cost of automated testing exceeds the money automation was supposed to save. At that point, any return on investment disappears. Done correctly, automated software testing consists of three parts: (1) creating the automation architecture, (2) writing the business test scenarios, and (3) monitoring the daily test work.
Step One: Create the Automation Architecture. Creating the right automation architecture is the most important step. Initially, that requires selecting and implementing the functional software test tool that best fits the application. Hundreds of test tools exist on the market today, and many articles have been written about picking the right one.
Much less has been written, unfortunately, about the importance of configuring the test tool correctly. Most functional test tools are advertised as "plug and play," one size fits all. This simply is not true. In fact, most functional test tools require at least some degree of configuring and architecting for the application being tested. The better the test tool, and the better it is configured for the application, the easier it is to build reusable and durable test scripts.
This is where insurance companies make a critical mistake. Instead of hiring people who know how to configure their functional test tools for their applications, insurers hire programmers to write code around the configuration problems. The programmers re-create the application logic in the test code. That is, they code twice. Coding twice means high maintenance costs, because every code change to the application requires a matching change to the test code. The programmers essentially rebuild the application. This approach has several additional downsides: (1) the scripts frequently are not reusable because they are hand-coded to one version of the application; (2) expensive, highly trained technical resources must build, use and maintain the code; (3) business people cannot easily use the automated test scripts because they lack a technical background; (4) adequate testing becomes costly and time-consuming; and (5) testing is done from the programmer's perspective, not the business's perspective.
If the functional test tool is configured correctly for the application, the test scripts are repeatable and reusable. Correct automation architecture goes well beyond just building an automation framework. Test scripts must remain dependable across multiple version upgrades, without the code having to be reworked. Business people should be able to use the automation for their own testing. Only then can the insurance company make sure that the software does what it is supposed to do -- by testing from the end user's perspective, not the developer's.
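To make that separation concrete, here is a minimal sketch, in Python, of what such an architecture might look like. The QuoteRequest and PolicyAppAdapter names and the placeholder quoting logic are all invented for illustration; the point is that business-readable test cases live apart from the single adapter layer that knows how the application is driven, so an application change touches the adapter, not the scripts.

```python
# A minimal sketch of a data-driven test layer. QuoteRequest, PolicyAppAdapter,
# and the quoting logic are hypothetical stand-ins for a real application.

from dataclasses import dataclass


@dataclass
class QuoteRequest:
    """One business-readable test case, roughly as an analyst would write it."""
    driver_age: int
    state: str
    coverage: int
    expected_decision: str  # e.g. "accept" or "refer"


class PolicyAppAdapter:
    """The only layer that knows how the application under test is driven.

    When the application changes, only this adapter is reworked; the
    business test cases stay untouched.
    """

    def quote_policy(self, request: QuoteRequest) -> str:
        # Placeholder standing in for real UI or API automation.
        return "accept" if request.driver_age >= 25 else "refer"


def run_business_cases(adapter, cases):
    """Execute every case and report failures from the business's perspective."""
    failures = 0
    for case in cases:
        actual = adapter.quote_policy(case)
        if actual != case.expected_decision:
            failures += 1
            print(f"FAIL: {case} -> {actual}")
    return failures


if __name__ == "__main__":
    cases = [
        QuoteRequest(45, "CT", 400_000, "accept"),
        QuoteRequest(17, "FL", 300_000, "refer"),
    ]
    print(f"{run_business_cases(PolicyAppAdapter(), cases)} failure(s)")
```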
Step Two: Write the Business Test Scenarios. The next step is to create the test scripts to run with the automation. This requires working closely with the business people, who live with the application, to create a priority grid. The grid ranks the business transactions in the application from most important to least important. For example, an auto insurance company that does 95% of its business with middle-aged New England drivers purchasing $300,000 to $500,000 of insurance would want to make sure that it has tested this group thoroughly. Conversely, teenage drivers in Florida might be a low priority. The ranking system lets the user make sure that the high-priority test scenarios are tested first. The user saves time and money by dedicating resources where they matter most. The rankings can be adjusted as time allows and as priorities change.
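As a simple illustration, the priority grid can be nothing more than a ranked table. Every transaction name and business-share figure below is invented, not data from any real insurer:

```python
# An illustrative priority grid: business transactions ranked from most to
# least important. All transaction names and share figures are invented.

priority_grid = [
    # (rank, business transaction, approximate share of the business)
    (1, "Middle-aged New England driver, $300K-$500K coverage", 0.95),
    (2, "New-business quote with multi-car discount", 0.03),
    (3, "Teenage driver in Florida", 0.02),
]

# Test the high-priority transactions first; revisit the tail as time allows
# and re-rank the grid as business priorities change.
for rank, transaction, share in sorted(priority_grid):
    print(f"{rank}. {transaction} ({share:.0%} of business)")
```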
State-of-the-art test tools can convert a single script into an array of test scripts by parameterizing the data. For example, suppose a health insurance company wants to test a variety of claim coverage scenarios for newborns. Instead of writing numerous scripts, the company can develop a single test script with a supporting data table, so that the one script can be used to test hundreds of scenarios. A sketch of such a script appears below.
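What the script looks like depends on the tool; here is a minimal sketch using pytest's parameterization. The adjudicate() function and every row of the data table are hypothetical stand-ins for a real claims engine and a real, business-maintained data table:

```python
# A sketch of one parameterized script driving many newborn-claim scenarios.
# adjudicate() and every row of the data table are hypothetical; in practice
# the table often lives in a spreadsheet that the business maintains.

import pytest

# Data table: (plan_type, days_old, claim_amount, expected_status)
NEWBORN_CLAIM_SCENARIOS = [
    ("HMO", 2, 1_500.00, "covered"),
    ("HMO", 45, 1_500.00, "denied"),    # past the 30-day newborn window
    ("PPO", 10, 25_000.00, "covered"),
    ("PPO", 10, 250_000.00, "review"),  # over the auto-approval limit
]


def adjudicate(plan_type, days_old, amount):
    """Placeholder for the real claims engine under test."""
    if days_old > 30:
        return "denied"
    if amount > 100_000:
        return "review"
    return "covered"


@pytest.mark.parametrize("plan, days, amount, expected", NEWBORN_CLAIM_SCENARIOS)
def test_newborn_claim(plan, days, amount, expected):
    # One script, many scenarios: adding a row to the table adds a test.
    assert adjudicate(plan, days, amount) == expected
```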
In some cases, a single test script can be leveraged to execute over 3,000 different test scenarios. Parameterizing the data makes testing faster and easier, increases application test coverage, and reduces the number of software defects.
Step Three: Monitor the Daily Test Work. After creating the automation and writing the business scenarios, the final step is to manage the testing progress. This is where a dashboard is useful. The dashboard is a test management tool that helps the user understand the exact state of testing at any particular moment on each application under test (AUT). The dashboard serves as a report card for how well the AUT has been tested to date, and what needs to be done now. It can be used to create a test efficiency rating, for "Go/No Go" decision support, for defect impact analysis, and for project test status. Examples might include:
* System status: the percentage of critical scenarios being tested
* Potential system inefficiency: the percentage of built test assets addressing low-risk scenarios
* Asset execution status: the percentage of critical assets executed successfully
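As a rough sketch, those three percentages might be computed from a list of test-asset records along these lines. The field names, sample scenarios, and pass/fail values below are entirely invented:

```python
# A rough sketch of the three dashboard metrics above, computed from a list
# of test-asset records. Field names and sample data are entirely invented.

test_assets = [
    # (scenario, risk level, automated test built?, last run passed?)
    ("Quote: middle-aged NE driver",  "critical", True,  True),
    ("Claim: newborn within 30 days", "critical", True,  False),
    ("Quote: teenage FL driver",      "low",      True,  True),
    ("Endorsement: address change",   "critical", False, None),
]

critical = [a for a in test_assets if a[1] == "critical"]
built = [a for a in test_assets if a[2]]

system_status = sum(1 for a in critical if a[2]) / len(critical)
inefficiency = sum(1 for a in built if a[1] == "low") / len(built)
execution = sum(1 for a in critical if a[2] and a[3]) / len(critical)

print(f"System status:          {system_status:.0%} of critical scenarios tested")
print(f"Potential inefficiency: {inefficiency:.0%} of built assets are low risk")
print(f"Asset execution status: {execution:.0%} of critical assets ran clean")
```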
The dashboard is an excellent tool for tying software testing directly to the business the software supports, by making sure that the most important business transactions are tested first. The dashboard should be role-based, so that management can quickly assess the overall AUT status and individual testers know which test cases to work on each day.
Software test automation, when done correctly, can yield anywhere from a 35% to 90% return on investment in reduced testing time, depending on how much of the existing testing is manual. The number of errors that reach production should also decrease by more than 75%. If your company is not seeing these numbers, the fault most likely lies in how the automation is architected or how the dashboard and business scenarios are used. Automated software testing is only worthwhile if it is saving time and minimizing errors.
About the Author: Jeremy Greshin is vice president for business development at Hartford-based Telesis, a provider of QA and software testing solutions. He can be reached at (860) 289-4504 or [email protected].