Growing AI Investments Push Enterprises to Demand Accountability From Tech Vendors

Imagine paying a construction company to build an office tower, only to discover that the contract limits the builder’s liability to a single month of rent if the structure fails. In most industries, such an arrangement would be considered absurd. In enterprise technology, it is standard practice.
Companies spent over $300 billion on artificial intelligence last year, yet most initiatives produced little measurable value. As skepticism grows, a new debate is emerging around accountability in enterprise technology contracts and whether vendors should share responsibility for outcomes.
For decades, enterprise buyers have accepted a peculiar commercial structure: when technology fails to deliver, the vendor still gets paid. In the age of AI – and increasingly in cybersecurity – a growing number of firms are rethinking that bargain. Security leaders, in particular, are growing wary of long-gestation technology projects that carry no assurance of the desired outcome.
The problem is not new. A 2002 KPMG study found that roughly half of all enterprise technology projects failed to meet their objectives. The finding caused mild alarm. A few white papers were written. Some conferences were convened. Then, enterprises bought more technology.
A generation later, the numbers have barely moved – they have simply acquired more zeroes. MIT's Project NANDA, which surveyed hundreds of companies deploying generative AI in 2025, found that 95% of initiatives achieved zero measurable return on investment. Globally, enterprises poured enormous capital into AI tools and infrastructure last year. Much of that investment appears to have purchased dashboards, demonstrations and considerable internal optimism.
This is not primarily a technology problem. The real anomaly lies in the commercial structure of the industry itself. Enterprise software vendors routinely cap their contractual liability at the equivalent of a small fraction of the contract value. In cybersecurity, where tools are often marketed as critical layers of defense, the same contractual structures frequently apply.
That arrangement resembles a construction company charging full price for a building while contractually limiting its responsibility for whether the structure stands.
There is a historical parallel in the early decades of management consulting. For most of the 20th century, major firms sold advice on a time-and-materials basis. Clients paid for consultant hours regardless of what happened next. The consulting firm would depart, the report would sit on a shelf and the invoice would be settled. Firms grew large on this model. Their clients, occasionally, grew more profitable.
What eventually changed was pressure – from clients, markets and competition – and a handful of firms willing to link their fees to outcomes. Performance-based consulting contracts, once unusual, gradually became a standard feature of serious transformation projects. Clients who adopted them first observed a striking change in consultant behavior: consultants tended to give better advice when they were going to share in its consequences.
Something similar appears to be emerging in enterprise AI. A small but growing number of infrastructure providers are beginning to assume contractual accountability for operational outcomes – not in the diffuse sense of uptime guarantees or service credits, but through commitments tied to measurable business improvements: reduced incident response times, lower breach risk, faster remediation or demonstrable productivity gains in security operations centers.
These models remain uncommon, but they reflect a broader evolution in buyer expectations. CISOs and procurement leaders are no longer purchasing technology solely on the basis of capability claims. Increasingly, they are asking whether vendors are prepared to stand behind those claims with financial consequences.
Historically, this type of contractual accountability has often marked the moment when a technology market matures.
The cloud computing market, for instance, only scaled globally once vendors accepted uptime commitments serious enough to concentrate attention. Before that, enterprise buyers struggled to trust the infrastructure. Contracts without teeth produced systems without consequence.
The next phase of enterprise AI and cybersecurity adoption may depend on a similar correction in incentives.
There are reasonable grounds for caution. Technology booms have a long history of overpromising before they begin delivering durable value. The railway mania of the 1840s eventually produced the infrastructure that transformed industrial economies. It also produced fraud, spectacular overbuilding and decades of structural inefficiency. The eventual usefulness of a technology is not necessarily a compelling argument for the particular investment decisions made along the way.
The enterprise AI accountability era may follow a similar arc. A pivot toward commercial incentives – undertaken by firms with the architecture, operational maturity and financial discipline to support it – could meaningfully improve what enterprise technology can accomplish. But the transition will also attract companies eager to claim accountability without the operational capability to deliver it. For buyers evaluating vendors, distinguishing between the two may become the central challenge of the next technology cycle.
What the enterprise technology buyer can no longer assume, however, is that simply purchasing more technology will produce better outcomes. The lessons from previous waves of enterprise computing are consistent. The tools that ultimately reshape industries are rarely just the most sophisticated ones. They are the ones that someone, somewhere, was prepared to be held responsible for.
