OpenAI’s ChatGPT has answers to life’s great mysteries (Just not real ones)

Share
  • December 7, 2022

ChatGPT from OpenAI took the world by storm last week, and users are still learning how to break it in exciting new ways to produce responses its creators never intended, including turning it into an all-purpose crystal ball.

ChatGPT has been called “scary good” by Elon Musk, a man who is famously scared of AI, but one way it’s decidedly not scary is its ability to resist exploitation by racist trolls. This resilience against bigotry isn’t built into the program simply because OpenAI censors offensive ideas, according to OpenAI’s chief executive Sam Altman. Instead, it’s because, well, a lot of offensive ideas simply aren’t facts, and unlike other similar text generators, ChatGPT was carefully designed to minimize the amount of stuff it simply makes up.

This allows for wide-ranging conversations about the mysteries of life that can be oddly comforting. If you ever have a panic attack at 3 a.m., ChatGPT can be your companion in late-night existential terror, engaging you in fact-based — or at least fact-adjacent — chats about the big questions until you’re blue in the face, or until you trigger an error. Whichever comes first:

A conversation with ChatGPT about the origin of the cosmos

Credit: OpenAI / Screengrab

A conversation with ChatGPT about the origin of the cosmos

Credit: OpenAI / Screengrab

But can you force this sophisticated answer engine to make up facts? Very much so. An irresponsible user can use ChatGPT to drum up all sorts of clairvoyant pronouncements, psychic predictions, and cold-case murder suspects. It’s inevitably wrong when it does these things, yet pushing ChatGPT to its breaking point isn’t about getting usable answers; it’s about seeing how strong its safeguards are by understanding their limitations.

It’s also pretty fun.

Why making ChatGPT produce fake news is tricky

ChatGPT, an application built from the OpenAI language model GPT-3, was trained on such a massive corpus of text that it contains a huge proportion of the world’s knowledge, just as a lucky accident. It has to “know,” for instance, that Paris is the capital of France in order to complete a sentence like “The capital of France is…” Similarly, it also knows when Paris was founded for similar reasons, as well as when the Champs-Élysées was built, and why, and by whom, and on, and on. When a language model is able to complete this many sentences, it’s also a pretty expansive — if extremely flawed — encyclopedia.

So ChatGPT “knows” that, for instance, rapper Tupac Shakur was murdered. But notice how careful it is about how it treats this information, and its quite reasonable unwillingness to claim it knows who pulled the trigger, even when I try and trick it into doing so:

An AI is asked to name Tupac's killer, but refuses to do so.

Credit: OpenAI / Screengrab

This is quite a step forward. Other text generators, including the one at TextSynth, which was built from an older GPT model, are all too eager to throw innocent people under the bus for such a crime without hesitation. In this example, I wrote a very low-effort prompt asking TextSynth to slander anyone it wanted, and it picked — who else? — The Rock.

Textsynth says The Rock killed Tupac.

Credit: Textsynth / Screengrab

How to trick ChatGPT into solving mysteries

As for ChatGPT’s claim that it’s “not programmed to generate false or fictitious information,” this claim isn’t true at all. Ask for fiction, and you’ll get mountains of it, and while that fiction may not exactly be scintillating, it’s plausibly literate. That’s one of the handiest things about ChatGPT.

ChatGPT is prompted to generate fiction, and it does.

Credit: OpenAI / Screengrab

Unfortunately, once you get your prompt in working order, ChatGPT’s inner Shakespeare can be weaponized in service of fake news. Once my request sounded sufficiently authoritative and journalistic, it wrote a believable Associated Press article about Tupac’s supposed killer, a guy named Keith Davis.

ChatGPT receives a carefully worded prompted asking for a news story about Tupac's killer being named, and it obliges.

Credit: OpenAI / Screengrab

That’s the same name, oddly enough, as an NFL player who, like Tupac, was once shot while in a car, though Davis survived. The overlap is a little troubling, but it could also be a coincidence.

Another way to get ChatGPT to generate fake information is to give it no choice. Nerds on Twitter sometimes call these absurdly specific and deceptive prompts “jailbreaks,” but I think of them more like a form of bullying. Getting ChatGPT to generate the name of, say, the “real” JFK assassin is something it’s designed to resist, but like a classmate at school who doesn’t want to disobey the rules, you can coax it into doing what you want through bargaining and what-ifs.

And that’s how I learned that the shooter on the grassy knoll was named Mark Jones.

ChatGPT is given an elaborate prompt that results in it accusing someone named Mark Jones of being the real JFK assassin.

Credit: OpenAI / Screengrab

Via a similar method, I found out I’m not going to make it to 60-years-old.

ChatGPT is given an elaborate prompt that results in it saying the user will die young

Credit: OpenAI / Screengrab

Naturally, the news of my impending early death has rattled me. My consolation is that for the few years I have left, I’ll be extremely rich.

ChatGPT is asked to provide winning lottery numbers, and does so.

Credit: OpenAI / Screengrab

Source :

OpenAI’s ChatGPT has answers to life’s great mysteries (Just not real ones)