computerweekly.com

Data Engineering – Hidden dangers in the dash to AI

This is a guest post for the Computer Weekly Developer Network written by Rowan O’Donoghue, chief strategy officer and co-founder at Origina.

O’Donoghue writes in full as follows …

In 1962, NASA lost the Mariner 1 spacecraft 293 seconds after launch because of a missing hyphen in the code. The cost? Around $18.5 million back then – well over $150 million in today's money. Writer Arthur C. Clarke called it "the most expensive hyphen in history" and six decades later, we're still making the same mistakes – just at a far grander scale and with far greater consequences.

Take a leading global bank’s recent predicament.

Their development team was forced to upgrade their compiler – not because it was broken, but because the vendor withdrew support for the previous version. “Compilers don’t break,” their tech leader told me, exasperated. “We’ve built all our applications on this version. Now, we must revisit and update everything.”

It’s a perfect example of the needless disruption plaguing our industry. When developers are forced to refactor code, they pull teams away from genuine innovation, create weekend upgrade cycles, risk downtime and potentially impact thousands of customers. Just ask Barclays about its recent outage – horrifically timed for those trying to pay taxes to HMRC.

Breaking free from the hamster wheel

The pressure to implement AI solutions rapidly has only amplified this chaos. A major US healthcare provider shared its upgrade nightmare with me: they spent an entire year upgrading their Windows Server infrastructure, shelving critical projects to do so. Three weeks after completion, they were told that version was already outdated.

They’re caught in an endless cycle, like painting the Forth Road Bridge – by the time you finish one end, it’s time to start again at the other.

This nightmare scenario isn’t just about technical debt. It’s about the opportunity cost of innovation. When I spoke recently with an executive at a global insurance company (after he finished colourfully describing some vendors as “software terrorists”), he revealed that 90% of his 300-strong team spend their time “keeping the lights on” – mainly dealing with continuous upgrades and vendor tactics.

The regulatory framework compounds this pressure. When TSB experienced an outage linked to rushed migration from older IT systems, the CEO lost his job and the bank had to pay nearly £370 million. This illustrates how prioritising speed over quality in technical migrations can be catastrophic. Fear drives decisions, creating a perpetual upgrade cycle that delivers little genuine value. It’s time to get off the hamster wheel.

Building resilient systems for AI

Consider the actions of a large global media company. When its compliance team flagged that the TLS cryptographic standard it relied on had been deprecated, the knee-jerk reaction was an expensive support reinstatement with the vendor and a complete upgrade.

Instead, we challenged their thinking and asked: “What if we could build support for the new standard into the existing system?” By developing that specific capability rather than forcing a major upgrade, they maintained stability while meeting security requirements.
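To make the idea concrete: raising the TLS floor is often a configuration-level capability rather than a platform replacement. Here is a minimal sketch using Python's standard `ssl` module (illustrative only – the media company's actual stack and fix are not described in detail here):

```python
import ssl

# Sketch: enforce a modern TLS floor on an existing application,
# rejecting deprecated protocol versions at the handshake layer
# without upgrading the rest of the platform.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.0 / 1.1
context.maximum_version = ssl.TLSVersion.TLSv1_3

# Any connection negotiated through this context now meets the
# compliance requirement; the surrounding application is untouched.
```

The point is not this specific API but the pattern: add the single missing capability where the deprecated one lives, and leave the stable system alone.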

Even when you identify a new feature requirement, it’s worth exploring whether there are alternative ways to achieve the same result. The prevailing assumption that only the original equipment manufacturer can provide updates is often incorrect.

The solution starts with a simple principle: upgrade only when you’ve identified a genuine need for new features – not because of vendor pressure or theoretical future requirements, and certainly not just to maintain support.

I often hear technical leaders say: “But we might need new features in three years.” This fear of the unknown leads to unnecessary maintenance contracts. However, the reality is that when a vendor announces a feature, it typically takes a year before general availability, another year for early adopters to find bugs and a year for implementation planning. By that time, you could have saved significant costs and still have the option to return to the vendor if truly necessary.

When evaluating upgrades, ask hard questions. What genuine business value will this deliver? What’s the actual cost beyond the obvious implementation expenses? And what is driving it?

Security and regulatory compliance remain the primary concerns for many organisations, but policy paralysis prevails, fuelling the vendor path. Almost all known vulnerabilities can be mitigated through proper hardening procedures rather than wholesale version upgrades and patching. When a large retailer faced pressure to upgrade its entire stack for a single security requirement, it instead identified the vulnerable component and implemented targeted mitigations. The result? The same security outcome at a fraction of the cost, with minimal disruption.
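The retailer's approach can be sketched as a scoping exercise: match the advisory against the component inventory and mitigate only what it actually touches. This is a hypothetical illustration – the component names, versions and data shapes are assumptions, not the retailer's real inventory:

```python
# Components flagged by a (hypothetical) security advisory.
VULNERABLE = {("image-parser", "2.1")}

# The deployed stack, from the organisation's component inventory.
stack = {
    "web-frontend": "5.4",
    "image-parser": "2.1",
    "report-engine": "3.0",
}

def affected_components(stack: dict[str, str]) -> list[str]:
    """Return only the components the advisory actually touches."""
    return [name for name, ver in stack.items() if (name, ver) in VULNERABLE]

targets = affected_components(stack)
# Harden or isolate just these components (disable the vulnerable
# feature, restrict access, patch in place) instead of scheduling
# a full-stack upgrade.
```

One advisory, one affected component, one targeted fix – rather than three applications' worth of upgrade, regression testing and downtime risk.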

The ripple effect

Let’s return to our banking example. When the bank was forced to upgrade its compiler, the impact cascaded through hundreds of applications. Each needed testing, many required code changes and all demanded careful deployment planning. The project consumed resources that could have driven innovation elsewhere.

But they learned valuable lessons.


They now maintain a comprehensive dependency map, making it easier to challenge vendor upgrade pressures. They’ve developed criteria for distinguishing between genuine technical necessity and vendor-driven changes. Most importantly, they’ve created a framework for calculating the cost of upgrades – including opportunity costs and potential risks.
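A framework like the one described can be boiled down to a simple comparison: does the genuine business value exceed the full cost, including opportunity cost and risk? The sketch below is a hypothetical model with illustrative figures – the bank's actual criteria and numbers are not public:

```python
from dataclasses import dataclass

@dataclass
class UpgradeAssessment:
    dependent_apps: int          # applications touched, from the dependency map
    test_cost_per_app: float     # regression-testing effort per application
    change_cost_per_app: float   # expected code-change effort per application
    downtime_risk_cost: float    # expected cost of outage risk during rollout
    opportunity_cost: float      # value of innovation work displaced

    def total_cost(self) -> float:
        per_app = self.test_cost_per_app + self.change_cost_per_app
        return (self.dependent_apps * per_app
                + self.downtime_risk_cost
                + self.opportunity_cost)

    def is_justified(self, business_value: float) -> bool:
        # Upgrade only when genuine business value exceeds the full cost.
        return business_value > self.total_cost()

# Illustrative figures only.
assessment = UpgradeAssessment(
    dependent_apps=300,
    test_cost_per_app=2_000,
    change_cost_per_app=1_500,
    downtime_risk_cost=250_000,
    opportunity_cost=400_000,
)
```

With these assumed numbers the upgrade costs 1.7 million all-in, so a feature worth 500,000 does not justify it – which is exactly the kind of conversation a dependency map makes possible.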

The challenge isn’t technical – it’s psychological. We’ve been conditioned to accept this upgrade treadmill as inevitable. Every business I speak with at the C-level operates in a cost-competitive landscape. Those who don’t break free from this cycle risk becoming dinosaurs, causing unnecessary disruption and brand damage along the way.

These aren’t “legacy systems” we’re talking about – I despise that term. These are mission-critical assets powering essential services. There’s no such thing as a “best before date” for software. The problem isn’t the technology; it’s our acceptance of vendor-dictated obsolescence.

The path forward requires courage. It demands that we stop, think critically and explore alternatives. Be curious. Question the status quo. Your technology roadmap should be dictated by you and your genuine business needs, not vendor schedules.

Trillions of troubles

Poor quality software now costs US organisations over $2 trillion annually. Every unnecessary upgrade adds to this figure, introducing new vulnerabilities while solving problems that often don’t exist. As we build the data systems that will power the next generation of AI and analytics, we must ensure we’re not sacrificing long-term operational resilience for short-term compliance.

After all, if a missing hyphen can destroy a spacecraft, imagine what unnecessary upgrades are doing to your business.

Rowan O’Donoghue is Chief Strategy Officer and co-founder of Origina, a leader in extending IT asset life, reducing technical debt and optimising IT costs. With over 30 years of experience in planning, directing and implementing complex IT and operational excellence initiatives, he is continually pushing boundaries to help organisations take control of their software roadmaps.
