AI hasn’t quite reached the promised land yet, but it’s danger close. In my last post, I experimented with using AI (Copilot in this case, which runs on the same OpenAI models as ChatGPT) as a full-fledged writing partner. We looped through each step of producing the post. We even argued (respectfully) about my tagline at the end. I have experience using AI for routine tasks, but this session both surprised and worried me.
Prep work for this took much longer than just sitting down and writing the post on my own. However, it’s a one-time task that might save work and create better writing later. (If you’re interested, I’ve posted my “Master Preferences” prompt in Substack’s Notes section.)
One serious problem with every current large language model is that it “hallucinates”. It doesn’t truly understand what it’s writing. What it has is a sophisticated statistical model of which word is most likely to follow the ones it has already produced in the current context. It can be wildly wrong and still sound completely plausible, even logical.
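To make that concrete, here is a toy sketch of the next-word idea in Python. It is only a cartoon of the mechanism, with a made-up probability table (no real model works from a lookup table like this), but it shows why fluency carries no guarantee of truth:

```python
import random

# Invented toy table: context -> probabilities for the next word.
# The numbers are made up for illustration; a real model learns weights
# over a huge vocabulary from training data instead of using a lookup table.
NEXT_WORD_PROBS = {
    "send the link to the": {"team": 0.45, "AI": 0.35, "client": 0.20},
}

def next_word(context: str) -> str:
    """Sample the next word in proportion to its probability."""
    candidates = NEXT_WORD_PROBS[context]
    words = list(candidates)
    weights = list(candidates.values())
    return random.choices(words, weights=weights, k=1)[0]

print("send the link to the", next_word("send the link to the"))
# Over a third of the time this completes with "AI": fluent, statistically
# plausible, and wrong, because nothing here checks whether an AI can
# actually receive email.
```

Scale that idea up to billions of learned parameters and you get prose that sounds authoritative; the mechanism still never checks its facts.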
Here are some examples from my own use. I asked the AI for detailed instructions on the best way to index and tag several years’ worth of notes. It gave me very detailed step-by-step instructions to export the notes, convert formats, store the files on a cloud server, etc. Several hours of work later, I reached the final step in this tortuous process: email the link for my OneDrive folder to the AI… (Think about that step for a second.)
It’s an AI! It doesn’t HAVE an email address! When I pointed this out, the AI apologized. It doesn’t understand what an apology means, but the response sounded quite good.
I’m taking an online algebra class for fun through Khan Academy. I was happy about a difficult problem I had gotten correct, but I wanted to review the solution step by step with their AI tutor, Khanmigo. When I worked through the problem with the AI, it got both sides of the equation wrong. It doesn’t have the answer key available, and it simply did the math incorrectly. The AI apologized and acknowledged its error, but the mistake undermined my faith in the tutor’s ability to teach.
None of this would matter much if the AI were simply bad at what it does. I know that Copilot will often create weird images. Ask for a person, and it may give them three arms. Ask for a man tossing a life ring, and it shows him standing on top of the water. Ask for a group of Marines pulling a Humvee out of the mud, and it shows people pushing hard from both ends of the vehicle. I know to look out for those things.
The real problem is that it does a fine job most of the time. Need some story ideas? Easy. Want to process a bunch of material and reformat it? No problem. Searching for a particular phrase? Cake. Want a suggested story outline? Simple. Want to index thousands of notes? Easy-peasy… until you realize you’ve blown four hours following hallucinated instructions.
It’s like having a self-driving car that glitches only once in 10,000 miles. By the time it happens, you’re no longer closely monitoring the vehicle; you’re not asleep, but you’re not expecting a catastrophe either. It takes you time to react, and even if you’re wide awake and paying some attention, the glitch is going to create problems.
I’m already a cyborg by some definitions. I have hearing aids, glasses, and artificial knees. I use digital technology extensively as my “Second Brain” and external memory. I’m an early adopter of new technology.
I’ll continue experimenting and learning with AI firmly in the mix. Why? Because it will get so much better. Today’s AI is the worst version we’ll ever work with, and it can already be stunning. Some of my posts will be written by me and the AI working together. Others, like this one, will rely on my analog skills.
That’s my perspective…