When a Victorian solicitor submitted AI-generated case citations to court without verification, he didn't just lose a case; he lost his ability to practise independently. For two years, Mr Dayal must work under supervision, stripped of his authority to handle trust money or operate his own practice.
His mistake? Trusting AI without understanding how it works.
Wenee Yap, a Dean’s Commended lecturer whose teaching has ranked in the top four of all UTS subjects based on student feedback, has witnessed the profession's dangerous knowledge gap firsthand. She's now leading a groundbreaking AI literacy program for the College of Law and its leadership arm, FrontTier. The aim: to prevent the next generation of lawyers from becoming cautionary tales.
The dangerous gap: don’t become the next Dayal or Deloitte
Yap, who is also an adjunct lecturer and Director of AI Literacy at 43° Below, calls it "the dangerous gap": the space between AI adoption and AI education.
"We're seeing lawyers rush to use GenAI tools because they're under pressure to be efficient, to stay competitive," she explains. "But they don't understand that GenAI is a probability machine, not a thinking entity. It doesn't know truth from fiction. It just predicts the next most likely word."
"Without proper training, lawyers are facing career-compromising consequences and exposing their firms to catastrophic risk."
The problem is, AI makes convincing mistakes…
The statistics reveal a profession at risk. 79% of Australian lawyers are already using AI in their practice — a 315% increase from 2023 to 2024. Yet the majority lack structured education to evaluate AI's reliability and risks.
The consequences are mounting. Deloitte Australia was forced to refund part of a $440,000 government contract after delivering an AI-assisted report containing fabricated citations. In Murray v Victoria, a junior solicitor's failure to verify AI-generated sources saw her firm ordered to pay indemnity costs.
"The problem is that AI hallucinations are baked into the design of AI,” Yap explains. “Early benchmarking of Generative Pre-Trained Transformers prioritised generated answers over accuracy. Hallucinations are a feature, and they’re a bug. Like the loudest fool in a room, AI is so confidently wrong.”
The problem isn't that AI makes mistakes. It's that AI makes convincing mistakes.
"GenAI is a probability machine," Yap says. "It assembles tokens based on patterns, not truth. That plausible-looking citation might be entirely fabricated. It's like sailing with instruments that occasionally show you're in Sydney Harbour when you're actually heading towards Antarctica."
This failure to understand how AI works is at the heart of why so many lawyers around the world are facing fines and disciplinary action.
“AI lies. If you expect lies, you’ll expect to sense-check and verify anything AI tells you. That’s what correct use looks like. This is the heart of AI literacy.”
Your ethical and professional obligations when using AI
For Chantal McNaught, co-lecturer of FrontTier’s AI literacy program and Director of Ethics and AI Practices at 43° Below, the issue goes beyond technical competence. Fundamentally, it's about professional responsibility.
A PhD candidate in law at Bond University researching how lawyers navigate the conflicts between law as a profession and law as a business, McNaught brings over 15 years' experience in legal practice and legal technology to the AI literacy program. She recently appeared on TVNZ Breakfast discussing legal AI and sits on the Law and Technology Committee of the Law Association of New Zealand.
"After meeting tens of thousands of lawyers across two countries, I've seen how ethics and technology adoption intersect," McNaught says. "Lawyers have fundamental duties: to the court, to clients, to the administration of justice, and ultimately to the rule of law. These duties don't change because you're using AI. They intensify."
McNaught points to mounting judicial responses: professional bodies and courts around the world are issuing urgent warnings. The NSW Supreme Court has formal requirements for AI disclosure. The Victorian Legal Services Board has issued explicit warnings about disciplinary consequences.
"Misuse is a competency issue, leading to findings of conduct rules breaches, costs orders, and variation of practising certificates," McNaught explains. "Yet hands-on, practical training that is not vendor-focused remains virtually non-existent."
As a founding member of The AI Assembly—a not-for-profit organisation delivering support and education for the intentional and ethical use of AI by small to medium-sized enterprises—McNaught understands that ethical AI use requires more than good intentions.
"You cannot delegate professional judgement," she says. "If you lack competence to verify any legal reasoning output independently, then you lack competence to use AI for that task. Period."
Much of this knowledge forms the foundation of FrontTier’s AI literacy training program. Unlike other training, it is tool-agnostic. In other words, it’s useful regardless of whether you use ChatGPT, Copilot, Claude, Harvey, VincentAI, Legora, or any other suite of AI tools.
Think of AI in the same way you would a hammer
The College of Law and FrontTier identified the need for a refreshingly practical approach to AI literacy and built the program around it. Rather than treating AI as either saviour or threat, it teaches lawyers how to identify appropriate use cases.
"AI is a hammer," Yap explains. "A powerful, sophisticated hammer, but still just a hammer. Your job isn't to figure out whether hammers are good or bad. Your job is to find the right ways to use the hammer.”
Designed to be immediately useful to lawyers, the AI literacy program will cover core AI concepts, prompt engineering under Anthropic’s 4D AI Fluency Framework, cognitive psychology, data security, and risk management.
Its core message is simple: never trust, always verify.
"GenAI never hesitates. Never uses hedging language," Yap says. "It writes with confident clarity. Human psychology interprets this fluency as accuracy. Smooth and confident writing make our brains assume information is correct. It’s why liars who lie with confidence succeed so often. Believing AI without verifying its output is where careers get compromised."
The program teaches lawyers to engage what cognitive psychologist Daniel Kahneman calls "System 2 thinking" — slow, deliberate, analytical thought — whenever AI is involved.
"When AI is involved, you must engage System 2 thinking," Yap explains. "GenAI is specifically designed to produce output that triggers System 1 thinking — fast, intuitive, trusting. Your fast brain sees polished legal writing with proper formatting and thinks 'This looks fine.' But that plausible-looking citation might be entirely fabricated."
AI literacy is your competitive edge
For McNaught, AI literacy isn't just about risk management. It's about competitive positioning.
"The question every firm is asking is: 'How do we maximise the benefits of AI but safely so we don't end up as the next Deloitte?'" she says. "Firms need supportive, structured learning curricula, not ‘here’s AI, go be 50% more efficient.’ They need practical frameworks, not experiments or marketing."
The AI literacy program's flexible design allows lawyers at every career stage to participate—from recent graduates and students building foundational skills to senior partners developing strategic implementation frameworks.
"This program can be tailored to serve multiple legal audiences while maintaining the same core curriculum foundation," McNaught explains. "Whether you're a graduate entering practice or a partner making procurement decisions, you need to be GenAI fluent and that means understanding both AI's capabilities and limitations."
Lawyer Joshua Wong was among the first to try out FrontTier’s AI literacy course content.
“I found the differentiation between System 1 and System 2 thinking really interesting, and still quite relevant,” Wong says. “The framework tables and questions were practical too. It’s definitely very useful.”
For Yap, who pioneered AI literacy at UTS Faculty of Law and has consulted for major legaltech innovators, the urgency is personal. She's seen too many lawyers learn about AI's limitations the hard way.
"Your clients are already using GenAI," Yap says. "They're creating novel problems because they don't understand GenAI's limitations. But you do. Because you're doing this course."
McNaught agrees: "AI doesn't replace lawyers. It transforms legal practice. And with the right training, you're positioned to lead that transformation."
FrontTier’s CPD-eligible program launches at LawFest 26 on 11-12 March 2026. For more information or to get your early-bird 20% discount, register here.
"When in doubt, choose caution," Yap advises. "Keep learning. Slow down when AI is involved. And never forget—you're lawyers first, AI users second."
For lawyers who want to harness AI's power without becoming its next victim, the message is clear: the time to learn is now—before the courts teach you the hard way.