Misinformation in Historical Context: Lessons for the AI Age
During the tumultuous years of the First World War, the British government sought ways to help citizens make the most of their meager food rations. In the process, officials unearthed pamphlets attributed to a prominent 19th-century herbalist that advocated eating rhubarb leaves alongside the stalks.
Prompted by these findings, government-produced pamphlets encouraged the public to eat rhubarb leaves as a salad ingredient rather than discard them. There was, however, a critical oversight: rhubarb leaves are toxic. Reports of illness and deaths followed.
Although the government swiftly corrected its guidance and withdrew the pamphlets, the episode repeated itself during the Second World War.
Once again, officials turned to a repository of outdated resources, including those advocating the consumption of rhubarb leaves. Seen as an efficient shortcut, the materials were re-released, only to trigger yet another wave of poisonings.
The pamphlets were misinformation, yet the public had no reason to doubt them, since they came from a trusted government source.
The episode underscores a lasting lesson: misinformation can remain dangerous even after it has been corrected. It is a pointed reminder in the era of generative artificial intelligence (AI).
Distinctions Between Chatbots and Search Engines
Generative AI technologies, such as ChatGPT, are engineered to generate text and images from patterns in the data they were trained on. They can also disseminate misinformation at an alarming pace, outstripping the speed at which accurate information is produced and corrections circulate.
As previously illustrated, correcting errors does not necessarily eradicate the initial misinformation.
AI platforms such as ChatGPT and Claude do not work the way conventional search engines do.
Even so, users frequently treat them as replacements for search engines, drawn by their ability to summarize intricate topics in fewer clicks than a traditional internet search.
Search engines index articles and data on specific subjects and assess their reliability. Generative AI, by contrast, relies solely on immense collections of text, calculating the probability that words will appear next to one another. These “large language models” prioritize producing coherent-sounding sentences over ensuring factual accuracy.
For instance, if the phrase “green eggs and ham” appears frequently in its training corpus, the model’s probabilities will likely associate “eggs and ham” with the color green.
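To make the word-adjacency idea concrete, here is a minimal sketch in Python of a toy bigram model: it counts which words follow which in a tiny corpus, then chains together the most probable continuations. This is purely illustrative; real large language models use neural networks trained on vastly larger corpora, not simple bigram counts.

```python
from collections import Counter, defaultdict

# Toy corpus in which "green eggs and ham" appears repeatedly.
corpus = (
    "green eggs and ham . i do not like green eggs and ham . "
    "she ate green eggs and ham . he fried eggs and ham ."
).split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_likely_next(word):
    """Return the most probable next word and its estimated probability."""
    counts = following[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

# Starting from "green", chain together the statistically likeliest
# continuations; the model has no notion of whether the result is true.
word, sentence = "green", ["green"]
for _ in range(3):
    word, _ = most_likely_next(word)
    sentence.append(word)
print(" ".join(sentence))  # prints: green eggs and ham
```

Run it and the chain completes “green eggs and ham” simply because those words co-occur most often; statistical likelihood, not truth, drives the output.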
Plausible Yet Misleading Assertions
OpenAI, the creator of ChatGPT, has acknowledged that the architecture of generative AI inherently allows for the occasional propagation of falsehoods.
Its research found that large language models “hallucinate” much as students guess at challenging exam questions: they generate plausible yet erroneous statements rather than acknowledge uncertainty.
Such inaccuracies carry tangible repercussions. A recent study revealed that ChatGPT failed to identify medical emergencies over half the time. This issue is exacerbated by existing inaccuracies in medical records; a UK inquiry in 2025 estimated that erroneous records affected nearly one in four patients.
While a physician may order further tests to confirm a diagnosis, one expert observed that generative AI “delivers incorrect answers with the same assurance as correct ones.”
The crux of the matter, as another scientist aptly articulated, is that generative AI “detects and mimics linguistic patterns.” Accuracy is secondary to its primary objective: constructing sentences.
Research indicates that generative AI tools misrepresent news content 45% of the time, regardless of geographic location or language.
Such errors raise serious concerns about generative AI putting lives at risk. Some of its failures are merely comical, such as suggestions to eat rocks or to use glue to keep toppings on pizza.
Others are more ominous: fictitious hiking routes that could lead walkers astray, a meal planner that recommended a recipe capable of producing chlorine gas, and dietary advice that left one person with chronic bromide toxicity.
Prioritize Reliable Historical Information
Better education about the prudent use of generative AI will be essential as the technology spreads through government and other organizations.
Politicians are already incorporating generative AI into their daily operations, particularly for policy research. Furthermore, hospital emergency departments are utilizing AI technologies to streamline patient note-taking.
One practical safeguard is to seek verified information produced prior to the influx of AI-influenced text and imagery on the internet.
Several resources exist to facilitate this process, such as a tool developed by Australian artist Tega Brain that surfaces only content created before ChatGPT’s public debut on November 30, 2022.
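As a rough sketch of the underlying idea (this is not Tega Brain’s actual tool, and the field names are hypothetical), filtering a set of search results by publication date might look like this in Python:

```python
from datetime import date, datetime

# Cutoff: the day ChatGPT was released to the public.
CHATGPT_DEBUT = date(2022, 11, 30)

def predates_chatgpt(published: str) -> bool:
    """True if an ISO-formatted publication date falls before the cutoff."""
    return datetime.fromisoformat(published).date() < CHATGPT_DEBUT

# Hypothetical search results; the "published" field is an assumption.
results = [
    {"title": "Wartime rationing pamphlets, digitised", "published": "2019-04-02"},
    {"title": "AI-era foraging guide", "published": "2024-07-15"},
]

for result in (r for r in results if predates_chatgpt(r["published"])):
    print(result["title"])  # only the 2019 item survives the filter
```

The cutoff date is the only essential ingredient; any archive or search interface that records publication dates can apply the same filter.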

If you are inclined to verify the story that opened this article, traditional print sources may serve you best.
Accounts of government advisories promoting the eating of toxic rhubarb leaves can be found in sources such as The Poison Garden’s A-Z of Poisonous Plants and Botanical Curses and Poisons: The Shadow Lives of Plants.
Source: rnz.co.nz