It’s not uncommon for digital teams to carry some level of technical debt. Behind the modern, user-friendly frontends that many organisations have invested in over the last decade, there are often decades-old databases, complex integrations, delicate infrastructure and manual workarounds that only one or two people truly understand. Teams closest to these systems know this, but technical debt almost always loses the prioritisation battle to the next new policy commitment or departmental priority.
The problem is that ignoring technical debt isn’t free. It’s deferred cost with interest. And as government ambitions grow around AI, data sharing and joined-up services, the weight of legacy systems becomes an increasingly serious constraint on what teams can deliver.
This post explores why technical debt accumulates, what it costs, and how teams can tackle it sustainably without stopping delivery.
Why technical debt accumulates
Technical debt builds up in every organisation, but several dynamics make it particularly stubborn in the public sector.
Funding in government is overwhelmingly project-based. There’s money to build the thing, but once a service goes live, it’s harder to get ongoing investment to maintain, improve or modernise it afterwards. The team that built it moves on to the next priority (or the supplier moves on). What’s left is a service or product that works today but gradually degrades as the world around it changes.
Governance processes make this harder too. The business case and funding mechanisms that govern digital investment in government are designed for new projects with defined scope, timelines and benefits. They’re not well suited to the incremental, ongoing work of reducing technical debt. Writing a business case to refactor a database or replace an API layer feels disproportionate, but without formal approval, the work can’t get funded or resourced. (The government’s approach to funding digital is changing, which is encouraging.)
Political and organisational cycles reinforce this pattern. Visible launches get attention. A minister can announce a new service or an investment in new technology. You rarely hear an announcement that the database behind an existing service has been migrated to a modern platform, even though that work may have been harder and more impactful.
Teams are also stretched. Most digital teams are balancing the continuous improvement of existing services alongside delivery of new commitments, often without the headcount to do both properly. When something has to give, it’s usually the less visible maintenance and modernisation work. This creates a vicious cycle: the more debt accumulates, the slower delivery gets, the more pressure teams are under, and the less time there is to address the root cause.
Finally, there’s often no clear ownership of legacy systems. They sit between teams, maintained by whoever last touched them. Nobody is accountable for the long-term health of the technical estate in the way that a product manager is accountable for a service. This means problems are noticed but not owned, reported but not resolved.
The CDDO’s guidance on preventing technical debt and legacy now explicitly asks whether a legacy owner has been identified with responsibility for the technical health of a product or service, and whether funding has been set aside for future remediation. These are the right questions to be asking.
The cost of doing nothing
It’s tempting to frame technical debt as a problem for another day. There’s always something more urgent. But the cost of inaction compounds over time, and it shows up in ways that directly affect delivery and users.
Security risk increases. Older systems are harder to patch, harder to monitor and more likely to have known vulnerabilities. As cyber threats evolve, the gap between what legacy systems can withstand and what they’re exposed to grows wider. For systems that are not actively maintained, the risk of a serious incident rises year on year. It’s much more difficult to manage security during the life of the service when the underlying systems are outdated and unsupported.
Delivery slows down. Teams building new features on top of legacy systems spend a disproportionate amount of time working around constraints, understanding undocumented behaviour and dealing with fragile dependencies. What should take days takes weeks. This is often invisible to leadership because teams absorb the cost through longer delivery timescales rather than surfacing it as a discrete problem.
The GOV.UK team wrote about classifying and measuring tech debt, noting that technical debt becomes a problem when the cost of servicing it becomes too great and starts limiting the team’s ability to deliver improvements for users.
Integration becomes harder. The ambition for joined-up government services, data sharing across departments, and AI-enabled capabilities all depend on modern, well-structured technical foundations. Legacy systems with proprietary data formats, complex processing and limited APIs become hard blockers to these goals.
Supplier dependency deepens. When legacy systems can only be maintained by the supplier that built them, departments lose leverage, flexibility and the ability to bring capability in-house. The Technology Code of Practice is designed to help government avoid this kind of lock-in by encouraging open standards, interoperability and sustainable purchasing strategies.
Emergency remediation is expensive. When a legacy system finally fails, the cost of emergency response dwarfs what planned modernisation would have cost. And it happens at the worst possible time, disrupting live services, pulling teams off planned work and creating exactly the kind of crisis that erodes trust with users and stakeholders.
Practical approaches for teams
Tackling technical debt doesn’t require stopping everything and embarking on a multi-year modernisation programme. In fact, that approach usually fails. The teams I’ve seen manage this well treat debt reduction as a continuous part of delivery, not a separate initiative.
Make technical debt visible
Most leadership teams don’t have a clear picture of the debt landscape in their organisation. They might know that certain systems are old or problematic, but they don’t have a structured view of what the risks are, what the costs of inaction look like, or where investment would have the most impact.
Teams should maintain a technical debt register alongside their product backlog. This doesn’t need to be exhaustive or perfectly detailed. A rough categorisation of known debt items by risk (security, operational, delivery speed) and effort gives leaders enough information to make informed decisions. If technical debt is invisible, it won’t get prioritised. Making it visible is the first step towards addressing it.
The GDS guidance on tracking technical debt offers a practical model for this. It uses two factors to classify debt: the current impact of the consequence and the effort required to remove the cause. It’s deliberately lightweight so it can be part of prioritisation discussions. This kind of approach makes debt a shared concern across the team, not just something that lives in the heads of developers.
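To make this concrete, here’s a minimal sketch of what a lightweight register might look like as structured data. It’s illustrative Python rather than anything from the GDS guidance: the item names, categories and fields are made up, and a spreadsheet would do the same job. The point is that once impact and effort are captured consistently, the register can be sorted and discussed in prioritisation sessions like any other backlog.

```python
from dataclasses import dataclass
from enum import IntEnum


class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class DebtItem:
    """One entry in a team's technical debt register (illustrative fields)."""
    name: str
    category: str   # e.g. "security", "operational", "delivery speed"
    impact: Level   # current impact of the consequence
    effort: Level   # effort required to remove the cause
    owner: str      # who is accountable for this item


register = [
    DebtItem("Unsupported OS on case-management servers", "security",
             Level.HIGH, Level.MEDIUM, "Platform team"),
    DebtItem("Undocumented nightly batch job", "operational",
             Level.MEDIUM, Level.LOW, "Service team"),
    DebtItem("Hand-rolled CSV export in reporting API", "delivery speed",
             Level.LOW, Level.LOW, "Service team"),
]

# Surface high-impact, low-effort items first in prioritisation discussions.
for item in sorted(register, key=lambda i: (-i.impact, i.effort)):
    print(f"{item.name}: impact={item.impact.name}, "
          f"effort={item.effort.name}, owner={item.owner}")
```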

Build debt reduction into every sprint
One of the most common mistakes I’ve seen is setting up a dedicated “legacy modernisation programme” that runs in parallel to business-as-usual delivery. These programmes are well-intentioned, but they create problems. They compete with delivery teams for resources and attention, and they reinforce the idea that modernisation is a one-off project rather than an ongoing responsibility.
A more sustainable approach is to reserve a proportion of team capacity in every sprint for debt reduction. This might be 15-20% of capacity, depending on the severity of the debt and the team’s workload. The key is that it’s a standing commitment, not something that gets traded away every time a new priority emerges. Over time, this steady investment compounds and the technical estate improves without delivery ever stopping. Treat it as a service reliability and sustainability commitment, agreed with leadership, and reviewed monthly.
Modernise incrementally using the strangler fig pattern
The instinct when faced with a large legacy system is to replace it. Build the new thing, migrate the data, switch over, decommission the old one. This sounds clean but in practice, big-bang replacements in government are high risk. They take years, they cost significantly more than estimated, and they frequently fail.
A more effective approach is the strangler fig pattern: gradually replacing legacy components with modern ones at the boundaries, rather than rewriting everything at once. Each increment delivers value and reduces risk. Over time, the legacy system shrinks as more and more of its functionality is handled by modern components. The old system is eventually decommissioned not through a single dramatic migration but through a series of smaller, manageable changes.
This approach also means teams deliver working software at every step and work in the open, rather than disappearing for 18 months.
A good example of this is GOV.UK’s publishing platform. Rather than replacing the entire architecture at once, they built a publishing API and migrated content formats to it one at a time, starting with lower-risk pages.
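As a rough illustration of the mechanics, the pattern usually starts with a thin routing layer in front of the legacy system. The sketch below is hypothetical Python with made-up paths and backend names; in practice this logic tends to live in an API gateway or reverse proxy rather than application code, but the principle is the same: send migrated functionality to the modern component and let the legacy system keep serving everything else.

```python
# Paths whose functionality has already been migrated to the new component.
# This set grows over time as each capability is moved across.
MIGRATED_PATHS = {
    "/publishing/news",
    "/publishing/guidance",
}


def route(path: str) -> str:
    """Send migrated paths to the modern component; everything else
    continues to be served by the legacy system."""
    if any(path.startswith(prefix) for prefix in MIGRATED_PATHS):
        return "modern-backend"
    return "legacy-backend"


# As more functionality is migrated, the legacy system handles less and less
# traffic, until it can be switched off without a big-bang cutover.
assert route("/publishing/news/article-123") == "modern-backend"
assert route("/publishing/consultations") == "legacy-backend"
```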
Use brownfield delivery as a lever
In my post on building citizen-facing AI services, I talked about the importance of brownfield problems. Most opportunities for improvement in government aren’t greenfield. They’re about making existing services better.
Every time a team builds a new feature or improves an existing service, they’re touching parts of the technical estate. This is an opportunity. If the team is adding a new capability to a service that relies on a legacy API, that’s a natural moment to replace that API with a modern one. If they’re improving a user journey that passes through an outdated database, that’s a chance to migrate the relevant data.
This “leave things better than you found them” approach works because it ties modernisation to delivery. The work gets funded through the feature, not through a separate business case. The improvements are tested as part of the delivery process. And the team stays focused on outcomes for users while also improving the foundations.
Fund teams, not projects
This is a consistent theme across my writing and it’s especially relevant to technical debt. Long-lived teams with ongoing funding are far better placed to manage the health of their technical estate than project teams who move on after launch. A team that owns a service over time understands its quirks, knows where the risks are, and has the context to make good decisions about what to fix and when.
When teams are funded on a project basis, it becomes very difficult to invest in the long-term health of the systems they build. What happens to the debt register when the project closes? Who owns the 15-20% capacity commitment if there’s no long-lived team? Funding teams rather than projects creates the conditions for sustainable stewardship of public services.

What leaders need to do
Technical debt is a leadership problem, not just a technical one. Teams can advocate for attention and propose approaches, but without leadership support, they’ll always be fighting against the prevailing incentives.
That means protecting team capacity for debt reduction even when there’s pressure to redirect it. It means asking about the health of the technical estate as regularly as you ask about delivery progress. And it means creating governance and funding mechanisms that support ongoing improvement, not just new builds.
It also means patience. Technical debt took years to accumulate and it won’t be resolved in a quarter. But organisations that manage their technical estate well move faster on everything else. They respond more quickly to emerging priorities. They integrate new capabilities like AI more easily. They spend less on emergency fixes and supplier dependency. And their teams are more productive and more engaged because they’re not constantly working around problems that everyone knows about but nobody is fixing.
Investing in the foundations isn’t glamorous work. But it’s the work that makes everything else possible. Make the health of the technical estate a standing agenda item, in the same way performance, risks and delivery milestones already are.
