KPMG Fines Partner for Using AI to Pass AI Exam

A KPMG Australia partner was fined roughly $7,000 for using AI to cheat on an internal AI ethics exam. CEO Andrew Yates calls policing internal AI use "a very hard thing to get on top of."


In a case that highlights the growing pains of the "AI Era," KPMG Australia has penalized a senior partner for using generative artificial intelligence to cheat on a mandatory internal training module—which was, ironically, about the ethical use of AI.

The incident has sparked a debate on corporate integrity as firms struggle to regulate the very technology they are selling to clients.

The Irony: How It Happened

The partner, a registered company auditor, was completing a mandatory training module in July 2025 designed to teach employees how to use AI responsibly. According to internal reports, the partner uploaded the training manual into an external AI tool to generate answers for the final assessment.

KPMG’s internal monitoring systems flagged the activity in August, leading to a swift investigation.

The Financial Hit

The consequences for the unnamed partner were significant:

  • Financial Penalty: A fine of A$10,000 (approx. US$7,000) docked from future income.

  • Mandatory Retake: The partner was forced to retake the exam (this time, without AI).

  • Professional Scrutiny: The individual self-reported the breach to Chartered Accountants Australia and New Zealand, which has launched a separate investigation.


"A Very Hard Thing to Get On Top Of"

KPMG Australia CEO Andrew Yates addressed the breach with a candid assessment of the current technological landscape.

"Like most organisations, we have been grappling with the role and use of AI as it relates to internal training and testing," Yates stated. "It’s a very hard thing to get on top of given how quickly society has embraced it."

Despite the difficulty, Yates emphasized that the firm has invested heavily in AI detection tools and a firm-wide education campaign to ensure employees understand where the "red line" is drawn.

A Wider Trend of "AI Shortcuts"

This isn't an isolated incident. KPMG revealed that it has identified 28 cases of AI-related misconduct this financial year alone. While most involved staff at or below the manager level, the partner's involvement has drawn the most heat from regulators and politicians.

Greens Senator Barbara Pocock criticized the incident during a Senate inquiry, labeling the current self-regulation system as "toothless" and calling for stronger oversight of the Big Four consulting firms.

| Case Type          | Incidents (FY25–26) | Primary Action Taken                    |
|--------------------|---------------------|-----------------------------------------|
| Partner misconduct | 1                   | Financial fine & professional reporting |
| Staff misconduct   | 27                  | Internal warnings & exam retakes        |
| Total AI breaches  | 28                  | Ongoing monitoring & education          |

Why This Matters for the Industry

The scandal is particularly awkward for KPMG, which recently negotiated fee discounts from its own auditors on the basis that AI would make the audit process cheaper and more efficient.

The incident also serves as a warning to other industries: as AI becomes a daily-use tool, the temptation to use it for "low-value" tasks like mandatory training is high. For professionals in positions of trust, however—auditors chief among them—the shortcut can leave a permanent stain on a career.

The Road Ahead: Stricter Controls

In response to the scandal, KPMG has announced it will:

  1. Publicly Disclose AI-related cheating numbers in its annual results.

  2. Block AI Access during specific internal testing windows.

  3. Strengthen Self-Reporting protocols to ensure all professional bodies are notified of breaches.

As AI continues to evolve, the "cat and mouse" game between those using AI to work and those using AI to watch them is only just beginning.