

“The Mandela Effect” is a term for a strange cultural phenomenon: a large number of people share a memory of something that apparently never happened, or that happened differently than they remember. The classic example is the “Berenstain Bears.” For a variety of reasons, many Gen Xers and Millennials are convinced the books used to be the “Berenstein Bears” and find it hard to believe that Ma and Pa and the kids have always been “Berenstains.” Another classic example is the Sinbad movie “Shazaam,” which never existed, even though people “remember” it. Right now the Mandela Effect is mostly a fun conversational topic, but it’s about to get more common and probably more serious.
ChatGPT has been shaking up all kinds of circles. It may have “killed the college essay.” ChatGPT and AI tools like it may put thousands of coders out of work. These tools can build bots, make art, and carry on very strange conversations. Unfortunately, they also love to make things up. ChatGPT is pretty good at writing essays, but it often invents sources: its footnotes can’t be traced to real books and articles. It recently made up a Mortimer Adler book and wouldn’t easily back down.
The Mandela Effect is kind of funny, but if AI tools gain more ground and keep making things up, our information sources could become far less reliable. It’s no stretch to say that people are, and will be, using AI tools to generate online content (and print content). If we think internet disinformation is a problem now, just wait until it becomes harder to detect because it sometimes carries no apparent ideological bias to tip us off. And it is unlikely that the AI-generated text we encounter will always announce itself as AI-generated.
There are many potential outcomes from this uncertain future. On the fanciful side, AI may not only invent non-existent books and additional episodes of favorite shows by referencing them—it may actually go ahead and make them. What if AI gets so good at replicating authors that we can have more Hemingway novels? What if it refuses to back down on having “invented” new Friends episodes, so it uploads them? Or certain false memories could become so strong as to be indistinguishable from the real. If we are flooded with soundtracks of concerts we never attended, will we start to reminisce about them? What if your “On this Day” photos are fake? If only Philip K. Dick were around to tell us what that might look like.
In Dune, there are references to the “Butlerian Jihad,” a crusade against computers and thinking machines. It marked a specific moment in the time of Frank Herbert’s fictional world. We may not end up with a Butlerian Jihad, but it’s quite possible that books printed before a certain date or old websites that haven’t been updated will suddenly be held in higher esteem. We know that AI wasn’t inventing sources in 1996, for example. Older editions of books may become more valuable for a new reason. Hard copies won’t just be picked for personal preference—they’re harder to “update” and rewrite. What you have is what you have. The 1999 version of a book won’t change on your shelf. It might on your Kindle. Long live old books.
Old, printed books may not be the only beneficiaries of source reassessment. Consider the peer review process: if it was ever needed, it is needed now, down to checking the footnotes. Luckily, academia has obsessives who thrive on exactly that. Non-academic periodicals are unlikely to adopt such laborious fact-checking. Academic databases may get even pricier, since they will be hosting information that has passed through additional filters. Open-access academic journals may also pivot to a different role in the internet ecosystem, serving as more trusted sources.
As always, we will have to wait and see. AI tools might be trained to make up less. We might get better at discerning when they’re used and what they’ve done. Some government might use them to rewrite history. Some television studio might scheme to collect more royalties than it deserves. What will “stolen valor” look like in the age of AI imagination? If sources become even less trustworthy, will we find ourselves even tighter in the grip of rhetoric? Or will we devise some kind of source-ranking system? Maybe you will simply never run out of novels “by” your favorite author, and it will be harder to know whether you really played high school basketball.
Are we all about to be androids dreaming of electric sheep and fictional childhoods? It’s too soon to say. But we do know that memories and past events shape how we see ourselves and our world. Believing in the “Berenstein Bears” isn’t a big deal, but there might be misremembered things that matter much more. At minimum, we need to be paying attention and probably writing a few things down by hand.