Heard of a recent IT project failure? No? Maybe you need to get out more.
If there is one topic on which many organisations don’t seem to need any guidance, it's how to run an IT project off the rails.
Ask any non-executive director what they fear most in their Board oversight and high on their list will be the risks associated with a major IT project.
Every year, there is a litany of projects that have run off the rails in one shape or another. And these are just the ones we hear or read about.
One of our IT partners, Paul Kallenbach, provided some commentary on IT project failures in the Australian Financial Review four days before Christmas. Paul has extensive experience in technology contracts and IT projects.
If you missed the article, here is what Paul had to say.
He says IT projects, more than others, tend to suffer from the sunk cost fallacy. That is, companies or governments push on with flawed IT projects – for fear of wasting more money – rather than scrap them, pivot to another approach, or just start again.
He says 65 to 85 per cent of IT projects fail to meet their objectives, run over time or run over budget – and often manage all three. The average cost blowout is between 50 per cent and 100 per cent. A further 30 per cent of IT projects are cancelled.
Remember the example of the National Health Service in Britain, where the replacement of a core IT system was supposed to cost £6.4 billion, and yet wasn't abandoned until costs had ballooned to almost twice that.
Paul says there tends to be more publicity for IT failures in the public sector because of accountability requirements for government. However, IT failures happen all too often in private sector companies as well; they are simply handled far more discreetly, usually resolved out of the spotlight and away from the courts.
"What happens with private sector failures is they tend to be dealt with behind closed doors," says Paul, "unless there is an obligation under the listing rules to disclose, but it's pretty rare to have a failure of that magnitude to be reportable."
The categories of IT failures
Paul says IT failures usually fall into one or more of the following categories: planning, people, processes, paper and probity.
He says the biggest failure is usually planning – known in the industry as "optimism bias" – where people are too optimistic about timelines and benefits and underestimate risks, costs and complexities.
"The hard thing about IT projects – as opposed to construction projects – is that IT work is all knowledge-based. You are not building something physical," he says.
"You have a set of business objectives which often express themselves at a higher level and quite often they change. This often leads to uncertainty and lack of clarity. You really are beholden to the knowledge, quality and attitude of the people involved."
Paul cites Sydney Water's IT disaster with its customer billing system in 2002 – it blew out from a $38 million project to a $64 million one before being abandoned completely with $61 million being written off (total cost $135.1 million) – as a classic example of lack of planning.
"The board approved the project with no proper IT architecture in place; the finance department didn't review the business case and the planning suffered from optimism bias where they attempted to combine 12 existing systems and more than 60 external interfaces," he says.
Sydney Water had a string of IT disasters, including stage one of its customer management system, whose original contract of $21 million blew out to $55.3 million, and the Maximo consolidation project to replace its existing asset management system, which started at $18.4 million and grew to $40.7 million. All up, the final bill for Sydney Water's IT headache grew to more than $230 million – almost three times the original cost.
You can find many examples of troubled IT projects in reports published by public sector bodies, including the Victorian Auditor General, New South Wales Auditor General, Commonwealth Office of the Cyber Security Special Adviser and the Queensland Health Payroll System Commission of Inquiry.
One of the reasons people feel so helpless when it comes to IT is that we are left in the hands of the IT experts. Your average office worker knows the difference between hardware and software, but often not a lot more. Trapped by this lack of knowledge, you are quickly out of your depth – and so you tend to rely completely on the expert, something you wouldn't do in other circumstances. Yet sometimes it is the innocent, ill-informed question (as simple as a persistent 'why') that sparks an important discussion that otherwise may never have happened. Left unquestioned, experts can become self-reinforcing.
This ability of even experts to fall into error is worthy of further examination.
As late as the 1970s, economic theory was based on the premise that humans and markets were rational – that people were logical in their decision making.
Psychologists have spent the last 40 years proving that this is not, and never has been, the case. This is as relevant to decision making in major IT projects as it is to any other kind.
A number of cognitive biases have been identified over the years that should be kept in mind whilst undertaking a major IT project:
Confirmation bias
We love to agree with those who agree with us, and we like to spend time with people who hold similar views. We can find it uncomfortable spending time with people who hold different views, preferring instead to listen to those who reflect our own views back to us. This behaviour leads to confirmation bias: the unconscious act of favouring perspectives that accord with our preconceived views, while ignoring or dismissing views (no matter how valid) that challenge them.
In an IT project context, it's much easier to hold to the majority view that things are going well, and to marginalise those who question the progress or strategic direction of the project – even when the latter group may well be (objectively) correct.
In-group bias
In-group bias is similar to confirmation bias. We are tribal in nature, so feeling close to those in our inner group should come as no surprise. But this leads to feelings of suspicion and distrust towards those outside our group. We then overestimate the abilities and value of our inner group at the expense of people we don't know as well.
For example, an external IT consultant who has been engaged to review the progress of a troubled IT project may be met with suspicion, or even worse, actively stymied, as their activities may threaten the stability or thinking of the project “tribe”.
The gambler's fallacy
We place a great deal of weight on past experiences, believing they will influence the future. The classic example is a coin toss. If it comes up heads five times in a row, we believe the odds of it coming up heads at the next toss are very high. Of course, the odds are exactly what they were for each earlier toss: 50/50.
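That the coin has no memory is easy to verify empirically. The short Python sketch below (all names and numbers are purely illustrative) simulates many sequences of tosses and checks whether a run of five heads changes the odds of the next toss:

```python
import random

random.seed(1)

# Simulate many sequences of six coin tosses. Whenever the first five
# tosses are all heads, record what the sixth toss turned out to be.
streaks = 0
heads_after_streak = 0
for _ in range(200_000):
    tosses = [random.random() < 0.5 for _ in range(6)]
    if all(tosses[:5]):            # first five tosses were all heads
        streaks += 1
        heads_after_streak += tosses[5]

# ~0.5: a streak of heads tells us nothing about the next toss
print(round(heads_after_streak / streaks, 2))
```

However strong the streak, the proportion of heads on the following toss stays at roughly one half – which is exactly what the project team banking on a change of luck forgets.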
Related to this is the positive expectation bias – that our luck has to change and positive outcomes are around the corner.
These expectations (biases) are not supported by statistically likely outcomes. In the context of major IT projects, despite a project running significantly over budget or consistently missing milestone dates, the project team will persevere (things will surely get better), with the consequent loss of millions of dollars (as in the case of Sydney Water's failed CRM project) or even billions (as in the case of Britain's National Health Service's failed core system replacement). In the markets, wise investors often say the best loss is the first loss. This often applies to IT projects that have run off the rails. Don't let your wishbone get in the way of your backbone.
Another example of this bias is last year’s ABS eCensus website failure. In his Review of the Events surrounding the 2016 eCensus, Alastair MacGibbon (Special Advisor to the Prime Minister on Cyber Security) found that reliance by the ABS on past patterns to guide future strategies simply didn’t work. He said:
The prevailing culture [of the ABS] can be identified in actions and decisions taken to prepare for the 2016 Census that date back to June 2012. Many seem innocuous, and almost all are compliant with established government practice. In many ways, the ABS is seen as an exemplar of established government practice: ticking the boxes, but not appreciating the challenges change presents.
Post-purchase rationalisation
This is when we purchase something and rationalise that it is what we wanted, regardless of how it has turned out. It's an inbuilt mechanism that helps us subconsciously justify any purchase, no matter how bad it was.
It’s related to hindsight bias, which compels us, after everyone knows the outcome, to believe we saw it coming all along.
This bias may be used, for example, to retrospectively justify the procurement of an over-specified or over-complex system or approach (perhaps because it’s perceived to be ‘future proof’ or bring other uncertain or intangible benefits) – when a much simpler (and cheaper) system would have sufficed. In this way, the project team justifies purchasing a Ferrari F1 – when a Ford Fiesta may have been perfectly fine. This in turn leads to (at best) delays in remedying problems that have emerged. At worst, it results in living with a problem rather than seeking a solution.
Status quo bias
Humans find change difficult. This can lead to choices that guarantee things remain the same or change as little as possible. It can also lead to unjustified conclusions that another choice may be inferior to the ‘status quo’ choice.
In IT projects, this can lead to an inability or unwillingness to 'pivot' away from a troubled project, even as it becomes apparent that the project is costing too much, failing to deliver on its business or strategic objectives, or proceeding down the wrong technical path. Increasing the project team's willingness to pivot – or even abandon the project completely – is one of the reasons why Agile approaches to software development have become more prevalent in the past three years (although it would be a mistake to think that Agile provides a panacea for IT project failures – but that's the subject of another article).
Negativity bias
Humans tend to focus more on negative news than positive news. We also tend to give more credibility to negative news. This affects our decision making.
For example, in the troubled Queensland Health payroll system implementation, the Commission of Inquiry Report found that too much credence was given to IBM’s (perceived) threat to sue the State and leave the payroll system unsupported. Commissioner Chesterman found that:
The real problem with such an approach [equating litigation with threatening the stability of the system] is that it attributes to one factor such a great importance so as to trump all others.
Bandwagon effect
Humans love to conform and fit in. This manifests itself in many ways, one of which is to 'go with the crowd', however large or small the crowd. We find comfort in following the crowd, even if there is good reason not to (or no good reason to).
Perhaps best summed up, in the IT context, by the (overused) phrase, “Nobody ever got fired for choosing IBM”. Even if it’s not the solution you need or seek.
Optimism bias
This is the tendency to be overly optimistic: we overestimate the likelihood of favourable or pleasing outcomes. This in turn can lead to under-preparation, during the planning stages, for what can go wrong.
This is a classic bias afflicting IT projects, where project teams are overly optimistic about benefits and timelines, while at the same time underestimating the project's risks, costs and complexities.
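The compounding effect of optimistic estimates can be illustrated with a toy simulation. In the sketch below (every figure is an assumption for illustration only, not data from any real project), a team estimates each of 20 tasks at its most likely duration of about 10 days; actual durations are skewed to the right, because tasks occasionally run long but rarely finish dramatically early:

```python
import random

random.seed(7)

TASKS = 20
ESTIMATE_PER_TASK = 10.0              # the team's 'most likely' estimate
budgeted = TASKS * ESTIMATE_PER_TASK  # 200 days planned in total

RUNS = 10_000
overruns = 0
total_days = 0.0
for _ in range(RUNS):
    # Log-normal durations: median ~10 days per task (matching the
    # estimate), but a long right tail of tasks that run over.
    actual = sum(random.lognormvariate(2.3, 0.5) for _ in range(TASKS))
    total_days += actual
    if actual > budgeted:
        overruns += 1

print(f"average actual: {total_days / RUNS:.0f} days vs {budgeted:.0f} budgeted")
print(f"share of simulated projects over budget: {overruns / RUNS:.0%}")
```

Even though each individual estimate looks 'fair' (half of tasks finish under it), the skew means the project as a whole runs over budget in the large majority of simulations – the planning fallacy in miniature.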
These cognitive biases are simple illustrations of what can potentially impact every key decision we make in major IT projects. Remaining vigilant to their potential impact on how your organisation approaches decision making is important.
Having regard to these biases, Paul has come up with a ‘tongue in cheek’ guide to creating an IT stuff-up. Follow it to the letter and you will stuff up your IT project.
Learn its lessons and you may manage to complete your IT project on time, within budget and in a form that delivers what you actually needed.
Paul Kallenbach's ‘oops’ guide to creating an IT stuff-up
- You asked them to do what?
Start by ensuring the supplier only has the most basic, superficial understanding of your needs. At all costs, avoid answering direct questions during the tender process, and do not allow them to audit you, as this may actually provide them with some understanding of the complexity of your business.
- You didn't agree to what?
Try to leave key issues, such as service levels or disaster recovery, to a very late stage in the negotiations, particularly if the project is critical to your business. That way, you'll probably be forced into signing a deal with key principles unclear or unsettled. If you're really fortunate, the supplier's performance in those areas will deteriorate immediately after you sign the contract.
- You mean they're supposed to make a profit?
Make sure you pressure the supplier into under-pricing the contract so that it can only ever operate at a loss. Also, make sure you lock them in at this price for five, maybe even 10, years with no benchmarking or price review mechanism. Then watch them flounder in that sea of red ink. That'll keep them focused.
- Who needs clarity?
Try to describe the services to be provided in as little detail as possible. Perhaps jot them down on the back of a napkin and staple it to the contract, or maybe just leave the contract schedules blank.
- Process? What process?
Spend lots of time developing useful and practical processes for managing scope changes, communication, project reporting, asset tracking, resource planning, early problem identification and dispute resolution. Then completely and utterly ignore them.
- Spoiling for a fight
From the very start, show them who's boss. If you've managed to negotiate a superior commercial position, exploit it. Be belligerent and obstructive. Don't compromise or be fair. If you're successful enough, very soon the environment will become so adversarial and unpleasant, key project staff will make a dash for the exit.
- Nothing less than perfection
Insist the supplier's pricing assumes absolutely perfect performance, leaving no margin whatsoever for any adverse events. After all, what could possibly go wrong?
- You think you own what?
Consider ignoring intellectual property issues. That way, important data or materials you once owned, or custom developments that you've paid for, will probably end up in the hands of suppliers. This will increase the chance that you'll be locked into that supplier, or have to make hefty payments for the return of materials you always thought were yours.
- She'll be right, mate
Concerned about the supplier's financial viability? Don't worry about asking for performance guarantees. After all, they're sure to struggle through.
- What do you mean you're leaving?
It's just far too much of an effort to think through when you may need to exit the relationship, or what you'll need to do at that time. So don't worry about it. After all, what are the chances of the supplier being bought, going broke or failing to perform? Or the whole relationship just going plain bad. And never address the process for transitioning personnel, data and know-how to a new supplier. That way, all the valuable things you've learned from the project will be lost and you'll be free to make the same mistakes over again.
There you go – 10 ways to ensure you stuff up your IT project. Do the opposite and you are well on the way to ensuring you don’t stuff up your IT project.
On the positive side, here's another suggestion worth considering for complex IT projects, where most people are outside their comfort zone and dependent on others.
A few years ago, psychologist Gary Klein put forward a suggestion to enhance the decision-making processes of leadership teams – what he called a 'pre-mortem', the opposite of a post-mortem. It can usefully be adapted for IT projects.
This is what Gary said in an interview with Daniel Kahneman, a Nobel Laureate and a professor emeritus of psychology and public affairs at Princeton University's Woodrow Wilson School:
The pre-mortem technique is a sneaky way to get people to do contrarian, devil’s advocate thinking without encountering resistance. If a project goes poorly, there will be a lessons-learned session that looks at what went wrong and why the project failed—like a medical postmortem. Why don’t we do that up front? Before a project starts, we should say, “We’re looking in a crystal ball, and this project has failed; it’s a fiasco. Now, everybody, take two minutes and write down all the reasons why you think the project failed.” The logic is that instead of showing people that you are smart because you can come up with a good plan, you show you’re smart by thinking of insightful reasons why this project might go south. If you make it part of your corporate culture, then you create an interesting competition: “I want to come up with some possible problem that other people haven’t even thought of.” The whole dynamic changes from trying to avoid anything that might disrupt harmony to trying to surface potential problems….
Potential problems that you can then plan for and ensure don't occur.
Daniel thought the pre-mortem was a great idea: 'I mentioned it at Davos—giving full credit to Gary—and the chairman of a large corporation said that alone was worth coming to Davos for. The beauty of the premortem is that it is very easy to do. Usually, doing a premortem on a plan that is about to be adopted won’t cause it to be abandoned. But it will probably be tweaked in ways that everybody will recognize as beneficial. So the premortem is a low-cost, high-payoff kind of thing.'
Just another way to plan for success by recognising what can go wrong up front and preparing for it.
With IT projects, every failure is different yet the underlying causes are often the same. The lessons to learn are many but proper preparation will always be key.
Here are some starting thoughts for any pre-mortem you may undertake:
- Meticulous business case analysis and planning is a prerequisite to embarking on an IT project of any significance. Always start with the question ‘why’ and answer it in detail. Only then can you answer the question ‘how’. Once done for the overall project, break down the same analysis for each area.
- Senior executive leadership must be involved in the project – meaning that they must be aware of and understand what's happening, and must be prepared to tackle complex matters, ask tough questions and make tough decisions. It may also be advantageous to have a Board member, or an independent consultant to the Board, who has domain-specific IT knowledge and skills, and is able to scrutinise the project for value and for continued alignment with business objectives.
- The quality of project personnel (of both the supplier and the customer) is paramount – as is tight governance of the project.
- The roles and responsibilities of the supplier and customer need to be clarified from the outset – including ensuring that the strategic is not confused with the operational, and ensuring that there are no overlapping layers of accountability (increasing the risk that accountability becomes uncertain or confused).
- The project must be continually aligned with changing business requirements. This could be achieved, for example, by implementing a series of 'gates' for the project to proceed through, or by dividing the project into smaller sub-projects.
- Processes should be implemented to ensure a constant communication flow between stakeholders, so that project ‘silos’ do not develop.
- Accurate reporting of risks to the project team and the Board must be undertaken – this is not just a 'tick the box' exercise. Bad news should not be hidden, omitted or obfuscated. No one and no group should be ‘managed’ by information flow.
- Senior management can positively influence project personnel behaviour – avoiding a culture of finger pointing, grandstanding and a fear of failure will reduce the risk that 'bad news' will be hidden.
- Organisational change needs to be carefully managed, in order that executive, project and end user buy-in and alignment is maintained.
- Not every IT project can deliver a return on investment (ROI). Sometimes implementing a new system is just a cost of doing business.
- Longer projects (multi-year) and bigger projects automatically have increased risk, including of people turnover.
- Don't buy a Ferrari F1 application when all you need is a Fiesta. Is there a cheaper competitor product out there? If so, why aren't you using it?
- It's important to have a cloud strategy – but cloud is unlikely to be the answer to each and every one of the business's requirements.
- 'Off the shelf' is usually optimistic (desirable as it is to buy a proven product) – some level of implementation or integration will almost always be required. But when you ‘customise’, you are essentially (in one sense) buying an untried product. How much added risk that introduces depends on the extent of the customisation.
- Never believe anyone who tells you that an application is "infinitely scalable". Anyone who says that is an advocate for the supplier.
- Never believe anyone who tells you that a project introducing one new system or a series of related systems to replace / integrate 10 legacy systems is only a little harder. It is exponentially harder and more risky.
- Be ever alert for cognitive biases of the type described above in the decision making for the project, not in an accusatory way (they are natural human reactions) but so that you can compensate and adjust for them.
Following these tips should help you better undertake and manage a major IT project.
It is perhaps one of the hardest things a Board and senior management team can do, rivalled only by trying to change the culture of an organisation (of course, even that is required for many major IT projects).
Many organisations struggle. Hopefully, you can avoid the traps by working through these tips.
* Tips provided by MinterEllison partner and technology contracts expert Paul Kallenbach. Please feel free to comment if you have further tips or observations to add to this list!