Accountable for every sloppy word
One cool trick to stop the word slop? Demand transparency when AI errors appear in documents that were meant to be written for people.

First it was the lazy lawyers who were too busy to write their own briefs. Oops. ChatGPT made a boo boo. Sorry. Remember that one guy in New Jersey? That was two years ago. He's been followed by a gaggle of lawyers (and even judges) who were so myopic they never heard about that guy, or the next, or the next, and decided this was a great way to save a few hours (did they still bill the client?) without ever pausing to wonder if that very officious output was accurate.
Lawyers here in Australia are also getting caught out all the time. If they're using ChatGPT it's even worse – local laws are not especially well represented in the American text machine.
I can only hope the clients involved fired their lawyers and found representation elsewhere. But the legal system itself seems to be taking a more deliberative approach to whether these errors in judgment are cause to see these representatives of the law disbarred. Wasting the time of the courts because you didn't want to waste your own precious time being diligent is not yet, on its own, grounds for the kind of penalty that would stop people doing it in future.
Consulting the language models
But what about when a company delivers AI-generated errors to a government in a fancy, expensive report? If you missed the news, Deloitte was paid $440,000 for a report into how to fix IT systems for welfare compliance. An academic noticed the report contained references to non-existent publications, a clear sign that AI had written at least some of the work and that nobody had performed thorough checks at the other end of the process.
At first, Deloitte claimed it stood by the report. Later, it conceded the errors, apologised and gave a "partial refund" of almost $98,000. "Oversight processes were not followed on this occasion," the firm said.
What matters even more in this Deloitte story is that the republished report may have updated its references, but errors remain. Not only in the wording on the page but, more insidiously, in the citations: some of the work cited, according to the relevant academics, bears little relevance to the report.
"Replacing completely fake footnotes with inaccurate footnotes that don't support the body of the report and have no relevance to it seems like an odd response to me," Macquarie University's Carolyn Adams told the AFR. "You'd fail them if they were a student."
The same academic pointed to this amazing phrase that the citation of her work was meant to support:
"contemporary regulatory settings ordinarily embed flexibility, proportionality, and procedural fairness in processes that also support individualisation and responsive approaches to compliance, particularly in complex cases."
Adams described this as gobbledegook, typical of AI-generated sentences. "It looks like it should be meaningful, but it isn't."
Who are you writing for?
This seems to be something of a look behind the curtain of these kinds of massive reports. Almost no one reads the detail. Open the front. Read the executive summary. Read the introduction. Scan for the breakout boxes with spotlights and recommendations. Jump to the conclusion and recommendations. Feel the weight of those pages in hand and feel like someone showed their working. Very professional. Very important.
I do work with big clients on big documents from time to time. But at every turn I am writing to be read by people who want to understand what is being addressed. I am writing to be clear and understood not only by experts but by the kinds of people who want to know more than they currently do.
Are you writing to tick a box and meet a brief? Or are you writing to be read and deliver meaning and understanding?
Sloppy business
The business world has allowed a gap – an escape clause – to open up between failing to check the work of AI and being held responsible for that work. Pointing over at the AI has become a mitigating circumstance for doing shoddy work.
Actually, we have the right word. Sloppy. Sloppy work. It fits the moment.
I absolutely see value in using AI 'deep research' modes that look up real information and help people go from zero to forty percent quickly, pulling together ideas and sources to build momentum on the way to getting good, real work done. But when errors land in the final work, it suggests all those checks and balances people claim to take pride in – the ones that supposedly set them apart from lesser mortals – are nowhere to be seen.
The old saying that the last 20% of the work takes 80% of the time? It should be more tangible than ever in organisations that deploy AI to speed their processes. The check phase, the polish phase, the collation phase. This time where the human is in the loop should be the definitive moment – because you and your organisation should never find yourselves making excuses about sloppy AI errors.
If you are deploying AI to support human work but running out of time to check the work of AI, you are cheating your clients.
If checking the AI's work takes longer than never having used AI in the first place? Stop using AI.
The work must always be the responsibility of the people behind it. And accountability should follow. Transparency should form part of this approach to responsibility and accountability. If you are found to have breached this line by failing to manage AI inputs, you should have to reveal the work processes that allowed the error to reach your clients.
In the case of lawyers, shouldn't this be worse than if they had delivered the same errors because they were unskilled at the task? Shouldn't the courts demand they reveal what they billed their clients in the process of failing them?
In the case of a major consulting firm, let us see whether you had already made your recommendations before you fleshed out the research. Show us whether you spent more time bloating the report out to 400 pages – because that's what a $440,000 report should look like – than on careful review and analysis of the research before reaching your conclusions.
Perhaps transparency would restore confidence? Perhaps it would reveal none of this was the case and it really was just a sloppy error? But when AI errors occur, the onus should be on the people who failed to deliver real, human-centred insights to show why they deserve more work in future.
I'm quite sure there are all kinds of business reasons and legal reasons why such foolish demands for transparency in exchange for sloppy work would never become a norm for industry. Feel free to tell me any you think of.
But we do often hear about how AI for business must have transparency and explainability measures in place to manage its risks. Perhaps if our suppliers are found to be using AI not as a support but as a shortcut, they too should have a lot more explaining to do – and face more than a slap on the wrist or a partial refund?