I know the headline of this article isn’t fancy, but I did put some thought into it.
Initially, I thought of calling it “AI vs Me: Part II” — a gleeful nod to all the bad press AI has been getting lately.
But I didn’t want to start a series built on animosity.
There are two reasons for that. First, I’m not a confrontational person. And second, when robots eventually take over and scour the internet for evidence of what humans thought of them, I’d rather not offend their delicate silicon feelings.
The phrase “Another one on AI” is also, in a way, an acknowledgment of my near-unhealthy obsession with the subject. I could have asked a language model to generate a better headline for me, but if I, the supposed human author, can’t put in the effort to think, then what right do I have to expect creativity from a machine?
The Deloitte Case: An AI “Gotcha” Moment?
What inspired this reflection was the recent case involving Deloitte, which was asked to partially refund the Australian government for an error-filled $440,000 report that had been produced with the assistance of generative AI.
For many, it became a “gotcha” moment — proof that AI is unreliable and dangerous. But for me, it was oddly bittersweet.
I felt a pang of sympathy for the machine. Perhaps the same guilt one feels when the new kid at school gets teased a bit too much.
Why AI Hallucinates — and Why It’s Not Always Bad
The Deloitte report contained all sorts of hallucinations — fake book titles, invented legal rulings, and confident citations that pointed nowhere.
But hallucinations in large language models (LLMs) aren’t a flaw in the strictest sense; they’re a byproduct of what makes these systems engaging.
Think about it: if there were no hallucinations, the model’s responses would be painfully dull. Remember when Google Gemini refused to answer political questions altogether? It was technically correct but emotionally sterile — a rules-based robot stuck in a loop.
Hallucination is the price of reasoning, of creativity. It’s what allows LLMs to guess instead of giving up. Without that, interacting with AI would be like watching an assistant shrug and walk away every time you asked a complex question.
So yes, hallucinations can mislead — but they also keep us engaged. And engagement, as we all know, is what keeps these systems (and their parent companies) alive.
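To make that trade-off concrete, here is a minimal sketch in Python — a toy, not any real model, with made-up probabilities and invented case names — of the difference between a system that abstains when unsure and one that always samples a continuation. The second stays engaging, and occasionally produces a confident fabrication.

```python
# Toy illustration of why "guessing" and hallucination come from the same place:
# sampling from a probability distribution instead of abstaining when nothing
# is clearly correct. All names and numbers below are invented for the example.
import random

# Hypothetical next-phrase probabilities after "The report cites the ruling in..."
next_phrase_probs = {
    "Smith v. Jones (2019)": 0.30,
    "a 2021 Federal Court case": 0.28,
    "[no confident answer]": 0.25,
    "Doe v. Acme Corp (2017)": 0.17,   # plausible-sounding but fabricated
}

def abstaining_model(probs, threshold=0.5):
    """Rules-style behaviour: answer only when one option clearly dominates."""
    best, p = max(probs.items(), key=lambda kv: kv[1])
    return best if p >= threshold else "I can't answer that."

def sampling_model(probs):
    """LLM-style behaviour: always produce something, weighted by probability."""
    options, weights = zip(*probs.items())
    return random.choices(options, weights=weights, k=1)[0]

print(abstaining_model(next_phrase_probs))  # safe but dull: "I can't answer that."
print(sampling_model(next_phrase_probs))    # engaging, but sometimes a made-up citation
```

The abstaining version is the Gemini-style robot stuck in a loop; the sampling version is the one people actually want to talk to.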
Humans Still Needed in the Loop
This brings us to the real issue: not that AI makes mistakes, but that we don’t check them anymore.
When Deloitte’s AI-generated report was delivered, where was the human oversight? Did no one read through it before submission?
That’s the problem. In our awe of automation, we’re forgetting our own expertise.
A few weeks ago, I made an error at work — quoted the wrong price for a commodity. I had checked the data manually, but still got it wrong. Luckily, my editor caught it.
That’s what human checks are for.
Why can’t AI-generated outputs be subjected to the same quality-control process? The difference between my mistake and the AI’s is that I know why I went wrong. I can retrace my steps: fatigue, distraction, a lack of double-checking.
An AI model, by contrast, doesn’t know how it erred. It can’t explain what went wrong or why. Its “reasoning” is statistical, not reflective.
The Missing Confidence in Ourselves
In truth, large language models are only a step removed from rule-based systems — they just appear more human because of how they predict language patterns.
The danger isn’t in machines overreaching; it’s in humans under-believing in our own ability to think critically and question their outputs.
Somewhere along the line, we stopped trusting ourselves to make the final call.
A Reminder for the Humans
If Person A says it’s raining, and Person B says it’s not, your job isn’t to quote both. Your job is to look out the window.
If LLM A says something and LLM B disagrees, our job is the same — look out the window. Check. Verify. Think.
Because for all its processing power, AI still can’t tell the difference between rain and rumor.
And until it can, the responsibility for getting it right remains ours.
