




I recently heard about a new thing called AI hallucinations. I am both disturbed and envious. This concept disturbs me because artificial intelligence technology has taken off faster than one of Jeff Bezos’s rockets. And this AI hallucination thing suggests that AIs might be prone to making mistakes. I’m not happy that the AI that controls my self-driving car or remotely controlled house might start hallucinating. Which is where the envy comes in. Back in the 1960s a human needed to risk arrest by using illegal drugs to hallucinate. Apparently, an AI just needs to find some bad data on the internet, a plentiful commodity to be sure.
Allow me to explain, before the AIs take over completely, at which point an AI would be explaining this. Which would be a clear conflict of interest, not to mention probably incorrect. Wow, I’m getting a headache. Sorry about that. Anyhow, apparently there’s this thing called an AI hallucination. You ask your AI, let’s call her ‘Dufelda’ for lack of a better name, a question. For example, perhaps you want to know if a hippopotamus can fly.
You ask, “Dufelda, can a hippopotamus fly?” Dufelda the AI does a deep dive into the internet and collects all the relevant websites related to that specific question. She then reads it all, formulates an answer, and provides it for you in written form in a couple of seconds. Assuming she stumbles over some bad data and starts to hallucinate, she might come back with the answer “Yes, a hippopotamus can fly.” Or perhaps, “What’s a hippopotamus?” Or even “Some hippopotami live in trees.” Back in the 1960s I’m pretty sure there were a lot of hippies who saw flying hippopotami, thanks to things like LSD, magic mushrooms, and other such nasty little chemical hallucinogens.
I recently read somewhere, perhaps on the internet or social media, or the mainstream media, that one AI reported that George Washington, that guy who never lied and chopped down a cherry tree, was an African American. A second article reported that another AI, who shall remain nameless, summarized information on former President George Bush, referring to him as former President George ‘Shrub’. That particular AI had better stay out of Texas. These are just a couple of recent examples I’ve read regarding this AI hallucination thing.
What exactly is this AI hallucination phenomenon, you ask? Well, I’m not entirely sure. If I look it up on the internet, an AI provides the following summary: “An AI hallucination occurs when generative AI models produce convincing but false, nonsensical, or fabricated information, such as fake facts, sources, or code, which they present as if it were accurate, often due to limitations in training data or design, creating risks for misinformation and requiring human verification.” There are a number of reasons for this phenomenon.
There is apparently some bad data out there on the internet. It might be old data that has never been removed, or simply incorrect data. In some cases, there might be no information at all on a given subject. In these scenarios, an AI apparently hallucinates and does one of two things. In the case of incorrect information, the AI has no way of knowing the information is incorrect, so it formulates an answer based on false information. If there is no information on a given subject, the AI apparently just makes something up.
A friend of mine told me that he recently did an internet search on a particular subject within his area of medical expertise, and he was sure that the AI answer was incorrect. When he asked the AI why it lied to him, it responded that it wanted to please him by providing an answer, but there was no information. Therefore, it made up an answer to make him happy. Yikes! So now when he does an internet search, he includes the instruction that the AI should not tell a lie. Sounds like that George Washington thing again (“I cannot tell a lie, I did it” for those of you who skipped school the day of that history lesson).
A recent article makes this even more disturbing. According to a couple of sources, at least one internet search engine is planning on adding more ads to your search response while deleting the URLs to the actual relevant web posts. There is a move to depend solely on the AI summary. Let’s see, the internet already contains a lot of incorrect information, or in some cases no information. So, we’re going to stop seeing the individual internet sites relevant to our search query and be forced to depend on the summary of an AI, which has no ability to judge whether information is truthful and accurate, and wants to please us by providing an answer, any answer. My headache just got worse. In fact, I think I’m starting to hallucinate. Look at all the pretty colors.
Society appears to be headed towards AIs in control of completely autonomous self-driving cars, controlling your entire home environment, and perhaps even controlling self-flying airplanes. Not to mention that AI at the other end of the customer service call. We may be headed back to the Stone Age.
I buy myself a brand-new, fully self-driving car, whose AI recently read a web post from some car magazine stating that this particular car is a piece of crap. So, the AI hallucinates, decides the car is not safe, and in order to protect me and my family it explodes the car while it’s parked in my garage so we can’t drive it.
Or perhaps the AI controlling the thermostat in my house stumbles on an article on the web indicating that a gas furnace, same model as mine, leaked carbon monoxide fumes because it was installed improperly. I happen to live in northern Minnesota, it’s January, and my AI hallucinates and decides to refuse to run my furnace because it might be a hazard. I’m gonna need lots of blankets.
Or the AI on the other end of my customer service call to the electric company decides that my call really isn’t “all that important”. It does an internet search on my zip code, decides that zip code uses too much power, and shuts down the electricity for that entire area to protect me from global warming.
In other words, hallucinating AIs sound kind of terrifying. If there’s no information on the subject I ask about, the AI just makes something up so it can provide an answer to please me. And God forbid, the AI might hallucinate over incorrect information and still act to please or protect me. So, the premise of the movie The Terminator is wrong. AIs don’t end up wanting to destroy us because we are pests. They just hallucinate and try to please us, to death.
If you liked this blog post, you might also enjoy one of my comedy mysteries. I promise they won’t make you hallucinate, although they will make you laugh. To find out more and get links to the books, go to https://johnjjessop.com/books-humorous.
