
(Photo: LinkedIn)
Deloitte, one of the world's "Big Four" auditors, has been accused of using artificial intelligence (AI) to compile a report containing false information. The incident has not only shaken the firm's reputation but also sparked a global debate about the responsible use of AI in a profession built on honesty, accuracy and trust.
The "fall" of the leader
According to AP News (October 2025), Deloitte Australia has agreed to refund the Australian government part of a contract worth AU$440,000 (about US$290,000) after a report delivered to the Department of Employment and Workplace Relations (DEWR) was found to contain non-existent legal citations and fabricated academic references. An internal review by Deloitte later confirmed that part of the content had been generated with Azure OpenAI, Microsoft's cloud-based AI service.
The incident immediately set off a chain reaction, prompting experts and regulators to warn in unison that the technology is advancing faster than legal frameworks and human oversight can keep up. CFO Dive called it "a wake-up call for the entire corporate finance sector", given the risks of letting machines take part in a process that demands absolute precision.
Deloitte is in fact one of the pioneers in applying AI to the audit process. The firm has invested more than $1 billion in digital transformation, committing to harness big data and machine learning for greater efficiency and analytical depth. According to Financial News London, more than 75% of Deloitte's auditors in the UK have used an internal chatbot called "PairD", a three-fold increase on the previous year.
AI helps auditors process huge volumes of data, extract information from thousands of pages of contracts, detect anomalies and save hundreds of hours of work. The incident at Deloitte Australia, however, exposes the downside: without close human oversight, AI can "hallucinate", producing content that looks plausible but is entirely false.

(Photo: Getty Images)
According to the investigation, Deloitte's 237-page report cited an Australian federal court decision that does not exist, and several references in the appendix were equally untraceable. Only after a government agency scrutinized the document and pressed for answers did Deloitte admit that AI tools had been used in its compilation. Although the firm insisted that "AI only played a supporting role", the episode significantly damaged its brand and raised questions about the transparency of its working process.
The issue here is not just a technical error; it touches the very roots of a profession founded on public trust. When one of the world's four largest auditing firms makes an AI-related mistake, confidence in the independence and ethics of the entire industry is shaken.
The impact is all the more serious given that AI has become a standard tool at the other major audit firms, including PwC, EY and KPMG. According to a survey by the Center for Audit Quality (CAQ), more than a third of global audit partners say they have used or plan to use AI in the audit process. If such systemic errors are not properly managed, the risk can spread on a global scale.
Auditing in the era of artificial intelligence: Opportunities and warnings
The shock has forced the auditing industry to re-examine how it is moving into the AI era, where opportunities and risks intertwine. After the incident, Deloitte quickly announced its "Trustworthy AI Framework", a set of guidelines for the responsible use of AI built around five principles: fairness, transparency, explainability, responsibility and confidentiality. The firm also expanded its global Omnia audit platform, integrating generative AI capabilities to support analysis and reporting, and affirmed that all AI-generated results must be checked and verified by human experts before release.
Experts, however, say Deloitte's efforts, while necessary, are only the first step in a long journey of recalibrating the relationship between humans and technology in auditing. Commentators at the Financial Times warn that many firms are racing to adopt AI without establishing clear control and impact-assessment mechanisms. AI saves time, but it also blurs the line between human and machine work, especially in tasks requiring judgment and professional skepticism, the very essence of the auditor's role.
PwC, another member of the Big Four, has publicly overhauled its training program for new hires: instead of performing basic audit tasks, they will learn how to "monitor AI", analyze its output and assess technological risks. According to Business Insider, the firm believes that "the auditor of the future will not just read numbers, but also understand how machines think."

(Photo: Rahul)
Meanwhile, regulators and professional bodies are beginning to consider issuing new standards for “AI auditing.” The International Auditing and Assurance Standards Board (IAASB) is working on additional guidance on the use of AI tools in evidence collection and reporting. Some experts have proposed creating a separate auditing standard to ensure global consistency.
These developments show that the auditing industry is entering a period of profound transformation. The technology cannot be rolled back, but its application demands careful and strict control. Otherwise, trust in the financial system, built up over centuries, could collapse in just a few clicks.
From the opportunity side, AI promises unprecedented breakthroughs: processing millions of transactions in a short time, detecting sophisticated fraud that humans would struggle to spot, and enabling "continuous auditing", the real-time monitoring of risk. Deloitte, PwC, KPMG and EY are each investing hundreds of millions of dollars a year in developing their own AI systems.
However, opportunities can only be turned into real value when accompanied by responsibility. The lesson from Deloitte Australia shows that technology can change the way of working, but it cannot replace ethics and human verification. In the world of AI auditing, trust is still the greatest asset.
From the Deloitte case, experts have drawn many important lessons:
- Absolute transparency in the use of AI: any content created or assisted by AI tools must be clearly disclosed to clients and regulators.
- Enhance training and oversight skills: Auditors need to understand how AI works, know the limits of the technology, and be able to evaluate the reasonableness of the results.
- Establish an audit mechanism for AI: not only financial data needs auditing; AI processes and models themselves must also be audited to ensure objectivity and transparency.
- Maintain professional ethics: No matter how much technology develops, the core principles of auditing remain "independence, honesty and objectivity".
Source: https://vtv.vn/nganh-kiem-toan-chan-dong-vi-ai-100251103192302453.htm