Firm Deploys Claude for Staff, Refunds Australian Government Over AI Errors

Deloitte will embed Anthropic’s Claude across its workforce despite flaws in a report it produced for a government client with the help of generative artificial intelligence, an error that cost the company thousands of dollars.
The consulting firm reportedly wants to build industry-specific compliance products and AI agent personas for roles such as accountants and software developers.
The company’s announcement frames the partnership as a strategic, long-term investment in enterprise AI.
Australian officials the same day disclosed Deloitte will partially reimburse the government for an AU$439,000 – approximately $289,200 – contract after turning in a report for the Department of Employment and Workplace Relations prepared using AI. The original draft reportedly contained a slew of errors, including fabricated academic references and an invented quote from a Federal Court judgment.
The “substance of the independent review is retained, and there are no changes to the recommendations,” a government spokesperson said.
Deloitte acknowledged using “a GenAI large language model (Azure OpenAI GPT-4o) based tool chain licensed by DEWR and hosted on DEWR’s Azure tenancy” to address “traceability and documentation gaps.”
University of Sydney academic Christopher Rudge described the disclosure as a “confession,” saying the consultancy had used AI for “a core analytical task.” Rudge told the Australian Financial Review that “you cannot trust the recommendations when the very foundation of the report is built on a flawed, originally undisclosed and non-expert methodology.”
The incident with the Australian report follows a string of recent examples in which organizations published AI-produced material containing hallucinations or invented details, from fabricated book titles on reading lists to erroneous citations filed in legal proceedings. Anthropic itself faced scrutiny earlier this year after one of its chatbots produced a false citation in a legal dispute – the company’s lawyer later apologized for relying on the AI-generated reference.