IT’s Record of Failure Is Worse Than You Think
Frank Wander, IT Excellence Institute
The headline of our recent review of Frank Wander’s new book assumed a “record of failure” on the part of IT. It’s only fair that such a claim be substantiated, and Wander, former Guardian Life CIO and founder of the IT Excellence Institute, does substantiate it in his book. The statistics are pretty damning, but Wander insists the truth is even worse. In a moment we’ll share his argument, but first, a look at how he substantiates the claim of IT failure.
[Read our review of Wander's book: What’s Driving IT’s Record of Failure — And What Can Fix It?]
Wander’s indictment reaches back to the dawn of the Information Age, with a look at the unforeseen limitations and costs of IBM’s OS/360 in 1964. However, he carries the reader up to the present with a 2012 McKinsey study of 5,400 projects across all industries, each valued at over $15 million. McKinsey found that 45% of these projects were over budget while delivering 56% less value than planned. The aggregate cost overrun was about $66 billion, and 17% of the projects actually put the company at risk.
Wander ruefully concludes from his research that IT does not perform well, that large projects regularly fail or underperform, and that often the debate is not whether a project will succeed or fail, but to what degree it will fail. The record revealed by investigations into IT performance, such as the McKinsey study, is bad. Unfortunately, Wander says, the truth is even worse. He asserts that neither corporations nor academics have an accurate picture of the financial magnitude of IT failure and cites five factors, which I quote here from “Transforming IT Culture: How to Use Social Intelligence, Human Factors, and Collaboration to Create an IT Department that Outperforms”:
1. The internal numbers are understated. The executives who initiated and sold many of the largest failures also shaped the perception of the outcome and the cost. Anyone in IT will tell you how many times he or she has seen a collective declaration of victory, when by any measure a development effort was a complete bust. Ultimately, reputations are at stake, and managing perception is critical. More important, if data are provided to external sources, they are collected, massaged, and then circulated for further correction and approval. Corporations always put their best foot forward, so lipstick is put on the pig before it is paraded in front of the viewing stand. Unquestionably, statistics are appropriately massaged prior to release. Otherwise, they are not provided at all.
2. Corporations broadcast successes, not failures. Companies are reluctant to share true failure numbers with anyone and do not collect numbers if they are bad. Why take the reputation risk, when the marketing machines run so well, and many industry awards go to efforts that ultimately turn out to be functional failures? Case in point: Two years in a row, the application winners of the Windows World Open went on to become legendary disasters. Both of these had required huge investments. When each of these solutions was selected, IT personnel within the respective companies broadly understood these systems to be the legendary failures they would become. “Successes” like these never get accurately classified—in fact, they remain successes because a retraction is never published. Last, every company has a corporate communications function that ensures that whatever is reported by the company paints the right picture.
3. Beauty is in the eye of the beholder. Expectations are so low that delivery is often construed as success. Not surprisingly, just completing a project creates a sigh of relief and a bright halo around the development team. Gauging success or failure through this distorted lens causes our industry to understate the magnitude of the problem. Do you think many of today’s projects would be classified as a success if expectations were high and success well understood? My experience leads me to a simple conclusion: Our collective potential is significantly higher than our results, so our relative success rate is therefore significantly lower than any number reported.
4. We cannot measure knowledge worker productivity. The software development industry cannot measure productivity, and every software development project is unique. Consequently, we cannot establish a performance benchmark at the beginning of a project and then compare actual performance to it at the end. So, what is a success? Everybody was extremely busy, and an end product was created? Very simply, the barometer of success and failure is engineered through expectation management and reporting. What remains is the comparative success of one effort versus another. You know the outcome was successful, but you don’t know if another group could have done it in half the time or twice the time. The evaluation is totally perceptual.
5. The opportunity cost of ignoring human potential is colossal. In terms of unproductive capital, we are conservatively talking in the range of $100 billion. So, when speaking of “success,” you have to put it in quotes. Unhappiness is a common result, perhaps even an expectation. It is only with great trepidation and uncertainty that a large initiative is funded by a business.