By now, most people have seen some form of AI crop up in their tools. The most obvious one is Google’s search engine, which now puts results from its AI mode first in the list. You can go pretty far with those prompts, even asking for image creation, although that’s a terrible place to create images (full imaging tools aren’t really available in AI search mode).
In my case, I’ve used it for some research here and there, often against a framework I had in mind. More recently, I’ve had it help me “test” some frameworks. I design a framework for something I’m building or writing, I outline it, paste the outline into AI, and ask it to challenge the framework from the perspective of, say, gender equity, under-represented groups, or literacy levels. Something more than a grammar check, something less than a full AI partner. When it’s done, I decide if I want to change anything in my approach.
But I’ve discovered some recurring oddities. Not necessarily bugs, just quirks in how LLM-based tools translate what I’ve said into something concrete.
Time Loops
About three months ago, I was testing Google’s tools to create an image. I eventually moved to ChatGPT to do the same. And both tools had the same problem.
I input a bunch of prompts. Created some sample images. Iterated a few things. All good. Then I told it to “tweak the image” in a certain way, and it said, “Okay, here you go.” But it was the same as the previous image. There was no “change” or iteration.
Okay, I thought, random glitch. Please regenerate the image with the following changes. Enter, whirr, ding. Same image. Huh?
I would then tell the AI that it gave me the same image again. Apologies, whirling indicator, bam! New image, same as the old. No matter what I did, it would not give me anything else.
It felt like a giant glitch. Or Groundhog Day. No matter what I did, same result. I couldn’t get out of the loop.
At the time, I had NO idea what was happening. Was it me? Was it the AI? Was it my browser?
I now realize it’s essentially a memory issue. Each chat in certain tools has a limited amount of “context” memory built into it. Once that’s full, loops start happening. Things bog down. In some tools, it will say, “Hey, I need to compact, okay?”, crunch your chat, and announce, “All ready!” Except you have no control over what it ditched. Images, perhaps? Instructions you definitely needed it to remember? Gone. In other tools, it compacts without even telling you.
The AI experts advise that when you’ve had it generate a lot of “assets” (pictures, documents, etc.), it’s better to start your next phase with a clean prompt. You can cheat, though … if you ask an AI tool for a “handover” note, it will generate one you can paste into the next chat, while the old one quietly fades into an ignored chat window. Waiting to see if you ever come back.
Google AI mode and ChatGPT seem terrible for this. I hit a lot of loop walls quickly. Gemini wasn’t so bad, but I think that was one of the ones that just compacted on its own. I actually prompted it a few times to save a handover, just to be safe. Claude, by contrast, doesn’t seem to have ANY of that happening. It hasn’t gotten stuck in a loop, and I haven’t seen it compressing/compacting/deleting anything yet.
PolyWogg 0, AI -25.
Technical support
One use case people recommend for AI is technical support. I’ve had four experiences using AI as technical support; it handled a couple of things okay-to-well and bombed on the others.
The first bomb was on support in a program called mIRC. The IRC part of that is for Internet Relay Chat, and mIRC has been my go-to tool for online chatting since the late ’90s, when I used to be really into it. I have a couple of specific uses for it now, and I installed a couple of plugins recently to automate some stuff. Great, except they didn’t work QUITE the way I wanted, and the default display was in 9-point font. So, I asked ChatGPT how to tweak the mIRC settings for what I wanted.
One of the first things I told it was that I was using version 7.8.3. mIRC has changed its interface and command structure over the years, so old commands won’t work; just like the voicemail messages say, “Please listen closely to the following options as our menu items have changed.” Okay, ChatGPT said, in its oh-so-confident way, that setting the display font to 16 points was super easy. It gave me a simple command, I entered it, and Bam! Error message. mIRC had no idea what that command was.
I told ChatGPT, and it said, “Oh, right, sorry, yes, it’s done THIS way.” Another command, same error. “Oops, let’s do it through the menus, guaranteed to work. Click on DCC / Options / Display / Fonts”. Except there is no DISPLAY option under Options. The menus have changed. It took me a while to find where fonts were. Made the change. No real help from ChatGPT; I just found the setting myself. Great.
Except no change. It would change the font for the chat window, but not the popup windows that I needed to tweak. Back to ChatGPT. Reminding it that I was in 7.8.3. Oops, it told me, the instructions were for version 4.3 or something archaic. What? Why? I specifically told you NOT to show me guesses, and to ONLY show me solutions that were validated for 7.8.3. It politely informed me that it hadn’t guessed; it had “INFERRED”.
And thus began my long descent into a deep rabbit hole with AI along for the ride, digging small tunnels ahead of me.
I knew the change could be done, that it wasn’t rocket science, and that I wouldn’t figure it out on my own. I knew just enough to know that either the default font or the plugin font was set too low. No other way for it to be wrong. I knew, therefore, Dr. Watson, that I could either fix the original setting, find a way to override the setting automatically, find a way to change it manually after the fact, or ignore it completely. As time wore on, that last option grew increasingly attractive.
To be fair, mIRC isn’t exactly a commercial application like Microsoft Word. It doesn’t have millions of users. And a user plugin within mIRC? That has even less information about it.
Yet each time I asked a question, the AI tool would say, “Oh, I know how to do that!” Except it never did. It couldn’t find where the default font was set, although I later figured out that it wouldn’t matter; it was the plugin font that was the problem. And it couldn’t figure out how to change fonts AT ALL. Nor could I. I opened EVERY file that came with the plugin. Lots of stuff for settings in the pop-up window, but nowhere was there a font setting. It seems to be hardcoded in the plugin, alas.
I was undaunted. I knew that if I couldn’t do the first two options, I could at least set it after it loaded. Because I could go into the menu, choose Options / Preferences / Fonts / Font choice. Or something equivalent. It took about 5 clicks to get to where I wanted to change the font. But then if another window opened, I had to edit that one too — another 5 clicks.
None of the options AI suggested worked. Auto-load commands, mIRC scripts — every attempt ended with the AI tool telling me, “Oh, it would have worked if you were using an older version.” WHICH I TOLD IT NOT TO DO! Grumble, grumble.
I found a workaround — I forced the font menu onto the taskbar manually and then told it to stay there forever; now when the pop-up shows up in 9-point font, I can click the taskbar, the menu opens, I change the font to 16 to 20 points, and it’s done. Super easy, two clicks.
PolyWogg 1, AI -25.
Drifting back to shore
This is a newer version of the loop problem. At least, it seems like it is the same sort of error.
I was trying to get Claude to do an image for me. I wanted to create a badge with an embroidered edge. All of the AI tools take different approaches to images: some handle certain kinds of image scenarios well, others handle different ones, and some don’t work at all.
Claude NAILED the first part of the badge problem. It gave me a perfect ring on the first try, which none of the other tools did (it uses SVG vectors to handle the geometry, which is why it was so accurate). But when it tried to do the embroidery, it failed completely. Nothing it did looked like embroidery.
I scrapped that idea and moved on. About 40 minutes later, out of nowhere, its attempts at embroidery showed up again in the margins. I was like, “Huh? Did I paste an old prompt?” So I asked it why it included embroidery in that version. It told me that because I had asked for it earlier, and the algorithm forgot that I later said no to it, it went back and did it again. It had “drifted” back to the earlier setup. A little weird, so I had it add a prompt component that said very clearly, NO EMBROIDERY ELEMENTS. About 20 minutes later, working much further down in the chat, the embroidery attempt came back. I checked the prompt; it clearly said no embroidery. So I asked again, “How?”
This was a second type of drift. It had analyzed the prompt. And because I had asked for embroidery before (positive inclusion) and was now excluding it (negative inclusion), the fact that I had mentioned it at all was interpreted as positive inclusion. It ignored the “NO” part. I suddenly felt like I was working at Foreign Affairs back in the old days of TELEXes, where you couldn’t afford for a word to be missed, so you would type NO/NO to make sure one of the “NOs” made it through. I didn’t try that with Claude, because it was now a VERY long chat, Claude was getting on in digital minutes/years, and it was showing signs of confusion. I reset and started a new chat, with no mention of embroidery. It never showed up again.
I couldn’t find a way around it, other than using new chats. Not sure that’s a win.
PolyWogg 0, AI -2.
That’s the bad news. I was going to write about the tips it gave me for GIMP, but that’s a mixed bag, not all bad. And what really excites me is all the good things it’s done for me. That’s the next post. 🙂