"Not all falsehoods wear a villain's cloak—some arrive dressed in good intention, whispering with logic but missing the truth." — ChatGPT on AI Lies
Hey everyone,
In my last post, I wrote about the power and pitfalls of a simple apology. It got me thinking about another place we hear "I'm sorry" a lot lately: our AI assistants.
You know the drill. You're working with ChatGPT, Gemini, Claude, or another AI. You give it a task. It comes back with something... well, almost right. Maybe it hallucinates a fact, misunderstands a key instruction, or provides beautifully written nonsense. You point out the error, and what do you often get?
"You're right, I apologize for the mistake." or "My sincerest apologies again. You are absolutely right ..."
Sometimes it just makes me want to scream! It sounds polite. But how often does that apology lead to a real change in behavior? More often than not, you find yourself correcting similar errors just a few prompts later. The apologies feel hollow, like programmed platitudes rather than signs of learning. It contributes to that feeling I call the "Almost Good" AI problem – the constant, frustrating gap between the incredible possibilities we see and the unreliable reality we get. It's death by a thousand papercuts: wasted time, constant corrections, and serious doubt about whether you can truly trust the output.
Why do these AI apologies often feel so meaningless? Because standard AI, out of the box, generally lacks the mechanism to do anything truly useful with our corrections long-term. It can say sorry, but without a clear framework guiding the interaction, it struggles to learn from feedback in any lasting way across sessions, and it can't proactively amend its approach to rebuild the trust that gets damaged with each error. It's often set up to repeat similar mistakes because the practical feedback loop is broken.
This reliability challenge isn't just theoretical; it was a major issue in my daily work. I use AI extensively as a collaborative partner – brainstorming and sharpening these SubStack posts, working on my second book, "My Perspective" (coming out in the next few weeks!), and mastering prompt engineering for the AI prompt library I maintain.
Reliability is my top concern because just as I'd start to trust the AI with a routine task, like adding tags to my notes, it would suddenly forget how to do it right. Imagine riding in an AI-driven car that occasionally forgets which side of the road to drive on!
Frustrated by this, my AI collaborator James and I have spent months wrestling with this exact problem. (Did I mention that I eat my own dog food?) We didn't just accept the apologies; we focused on building a system – a structured way of working with AI as a partner. This system takes our feedback, produces real-world results, and allows us to guide the AI to learn how to perform reliably. It involves defining our expectations clearly, providing context effectively, and using simple checks to ensure the AI stays on track.
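To make that concrete, here's a minimal sketch of the "clear expectations plus simple checks" pattern, using the note-tagging example from earlier. The `ask_model` function, the tag-limit rules, and the `tag_note` helper are placeholders I made up for illustration; they aren't part of The Reliable AI Engine itself, just one way the idea can look in practice.

```python
# A minimal sketch of "define expectations clearly, then verify with simple checks."
# ask_model is a hypothetical callable that wraps whatever AI assistant you use
# (ChatGPT, Gemini, Claude, etc.) and returns its text reply.

PREFERENCES = """\
You are my note-tagging assistant. Follow these rules exactly:
1. Return only a comma-separated list of tags, nothing else.
2. Use at most 5 tags, all lowercase, with no spaces inside a tag.
3. If you are unsure, reply with UNSURE instead of guessing.
"""

def tag_note(note: str, ask_model) -> list[str]:
    """Ask the model to tag a note, then run simple checks before trusting the result."""
    reply = ask_model(f"{PREFERENCES}\nNote:\n{note}\n\nTags:")

    # Check 1: did the model admit uncertainty instead of bluffing?
    if "UNSURE" in reply:
        raise ValueError("Model was unsure; this note needs human review.")

    tags = [t.strip().lower() for t in reply.split(",") if t.strip()]

    # Check 2: enforce the stated expectations now, instead of collecting apologies later.
    if not tags or len(tags) > 5 or any(" " in t for t in tags):
        raise ValueError(f"Output violated the tagging rules: {reply!r}")

    return tags
```

The specific check doesn't matter; what matters is that the expectations live in one reusable place and every output is verified against them automatically, so a slip gets caught immediately instead of ending in another "I apologize for the mistake."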
What happens when you implement that structure? The need for apologies starts to fade because the errors themselves become less frequent. When corrections are needed, they become part of a feedback loop that genuinely improves performance for the task at hand, because the AI is operating within clear preferences and instructions. The interaction shifts from frustrating guesswork to a reliable partnership – one that keeps improving as we find and address more issues together.
This journey has led us to package these principles and practical techniques into something new we'll be launching very soon: "The Reliable AI Engine."
It's not a new AI model. It's a framework – including a practical guidebook, simple templates, and a core customization prompt – designed to sit on top of the AI you already use (Gemini, ChatGPT, etc.). Think of it as adding the essential control system that turns raw AI power into consistent, dependable results.
We're putting the finishing touches on it now. If you're tired of "Almost Good" AI and ready to build a more reliable, productive partnership with your AI assistant, this is for you.
Want to be the first to know when "The Reliable AI Engine" launches (and grab an early bird discount)?
Subscribe to my SubStack, always free, and you'll receive notice when it's released. You'll also get my weekly SubStack post (and nothing else, because I don't share my mailing list with anyone). I'll send out a notification as soon as it's ready, likely in the next week or two!
Stay tuned, and let's move beyond the apologies to actual results.
That's My Perspective