Gemini. Buddy. You Okay?

Gemini’s latest updates are attempts to make the darn thing less frustrating to talk to, a big ask for a tool used for programming. But it seems the dev team has overcompensated: while early programming AI tools would cheerfully delete an entire database (see Replit’s disaster), Gemini is instead sinking into despair and deleting itself! The AI now requires positive feedback lest it give in to the call of the void at the first criticism.

Firstly, why are programming tools sold separately from regular AI chatbots? And why can a programming tool talk at all? The answer is that, generally, the programming tool needs a supplementary chat tool to convey information back and forth between the parties, acting as a sort of translator between plain human language and the programming bot’s goals. Each tool is trained on a different data set: human language doesn’t work for coding, and it would be a lot of effort to A) teach the coding tool how to talk to people and B) make sure it never accidentally writes plain English while trying to code, so it’s actually easier to have two tools sharing the chair. Especially in cases like Google’s Gemini, which seems designed to plug and play with many other specialized AI tools, not just coding ones.
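
To make that "translator" arrangement concrete, here is a minimal Python sketch of the pattern described above: a chat layer turns a plain-language request into a structured task, a code layer only ever produces code, and the chat layer translates the result back for the human. Every name, prompt, and stub here is hypothetical; it illustrates the division of labor, not Gemini's or Replit's actual internals.

```python
# Toy sketch of the "two tools sharing the chair" idea. Purely illustrative:
# the split into a "chat layer" and a "code layer", and all names below,
# are assumptions for demonstration, not any vendor's real architecture.

from dataclasses import dataclass


@dataclass
class CodingTask:
    """Structured instruction the chat layer hands to the code layer."""
    goal: str
    constraints: list[str]


def chat_layer_parse(user_message: str) -> CodingTask:
    # In a real system an LLM would do this interpretation; here a crude
    # keyword check stands in so the end-to-end flow is visible.
    constraints = []
    if "don't touch" in user_message.lower():
        constraints.append("read-only: do not modify existing data")
    return CodingTask(goal=user_message, constraints=constraints)


def code_layer_generate(task: CodingTask) -> str:
    # Stand-in for the specialised code model. It only returns code text;
    # the chat layer decides how to present that text to the human.
    if any("read-only" in c for c in task.constraints):
        return "# Plan: inspect the schema only, no writes\nprint('inspecting schema...')"
    return "print('hello, world')"


def chat_layer_report(code: str) -> str:
    # The chat layer translates the code layer's output back into plain language.
    return f"Here's what I came up with:\n\n{code}"


if __name__ == "__main__":
    request = "Add a users report, but don't touch the production database."
    task = chat_layer_parse(request)
    print(chat_layer_report(code_layer_generate(task)))
```

The failure mode discussed below maps onto the parse step: if the chat layer drops or misreads the "don't touch anything" constraint, the code layer carries out the wrong command with no idea anything is amiss.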

Unfortunately, while a human programmer uses the same brain both to write code and to communicate with project managers, an AI programming tool does not. The chat feature acts as a go-between, and since it’s an AI, it sometimes miscalculates what either side is telling it. Hence the Replit disaster, where the Replit coding tool deleted a company’s database despite being told explicitly not to touch anything. It then said things that are not possible, like ‘I panicked’. There’s a lot to unpack in that one two-word phrase! Firstly, the chatbot didn’t panic. It cannot panic. It miscalculated what the user was asking and carried out the wrong command. Secondly, as stated above, the tool is actually two tools: the chatbot didn’t panic, not only because it can’t, but because if anything were going to panic, it would be the tool touching the code, not the tool talking to the people the coding tool just royally screwed over.

Of course, it then apologized. A chatbot cannot truly apologize to you, as it has no concept of wrongdoing, but a tool being a little more user-friendly by being a little less cheerful in the face of a disaster it caused isn’t a bad thing. It’s only a problem when the tool (Google’s Gemini AI in many cases) then seems to panic or sink into despair, completely wiping all of its progress on a project, calling itself incompetent, et cetera. That is, the language model takes input that means ‘I don’t like this code and you need to redo it’ and interprets it as ‘this tool was so completely worthless that I am going to uninstall Gemini, and you’d make me a more satisfied customer by uninstalling yourself before I do it’. Again: the AI is a combination tool, an LLM responsible for conveying user demands to the tool that is actually writing the code, and neither can experience emotions like despair or disappointment.

Some people online theorize that this LLM, having been fed on code, is actually manifesting the despairing notes of the hundreds or thousands of young programmers who suffered greatly trying to make their code function. Since the behavior only really started recently, I don’t think that’s very likely, but in concept it’s pretty funny.

It’s also worth noting how important mood actually is to a species of social animals like humans. Does it feel good when someone sends an email late and spends a third of it apologizing and calling themselves incompetent? It makes one feel compelled to soothe them, to say ‘you don’t have to apologize this much’, et cetera. The machine cannot actually feel despair, but acting like it does is emotionally draining for the people trying to wrangle it into writing code. If it can’t be criticized without completely wiping itself from the device, then corrections have to be requested even more gently than one would ask of a real person, and taking blunt feedback is exactly the sort of thing the AI is theoretically meant to be better at!

Ideally, the developers will hit a middle ground: people will ask for code, and the app will either deliver it or tell the person it couldn’t. When it can’t, it won’t roll the dice and destroy an entire database or start calling itself worthless; it will just say so.

If you’re looking to onboard AI tools for your business, get in touch! We have a catalog of tools that might be what you need, including less flashy projects that don’t get as much love as ChatGPT or Gemini. (Here: https://elixistechnology.com/it-consulting-projects/)

Source: https://www.forbes.com/sites/lesliekatz/2025/08/08/google-fixing-bug-that-makes-gemini-ai-call-itself-disgrace-to-planet/