((You really seem like you’re into this kind of stuff.))

(Content warning: a mention of suicide.)

So The Atlantic pointed to a database listing every single book and author that Meta ripped off, without payment or credit, to build their AI. Everything I’ve ever written is in there. Everything, stolen, by these rich fucks. I’m in a bit of a lousy mood today. To that end…who wants to hear a story?

When generative text AI came on the scene a few years back, I, like a lot of writers, was nervous. It didn’t help that there were legions of techbros braying about how soon we’d all be out of a job, replaced by machines forever. I’ve always felt that understanding and knowledge are the greatest antidotes to fear: I rolled up my sleeves and dove in, wanting to learn everything I could about large language models and how they operate. I studied p-values and hypothesis testing, tokens and temperature.
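(For the terminally curious: here’s a toy sketch of what “temperature” actually means. This is not anyone’s real code, and certainly not Character AI’s; it’s just a bare-bones illustration, under my own simplifying assumptions, of how a model’s raw scores for the next word get turned into a weighted dice roll, and how the temperature knob makes that roll tamer or wilder.)

```python
import math
import random

def sample_next_token(scores, temperature=1.0):
    """Toy illustration: pick the next token from a model's raw scores.

    Low temperature sharpens the odds toward the top pick (safe, repetitive);
    high temperature flattens them (surprising, occasionally unhinged).
    """
    # Scale the raw scores by the temperature...
    scaled = [s / temperature for s in scores.values()]
    # ...squash them into probabilities (a softmax)...
    biggest = max(scaled)
    exps = [math.exp(s - biggest) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # ...and roll the weighted dice.
    return random.choices(list(scores.keys()), weights=probs, k=1)[0]

# Pretend the model scored three candidates for the next word:
fake_scores = {"the": 2.0, "a": 1.0, "aardvark": -1.0}
print(sample_next_token(fake_scores, temperature=0.2))  # almost always "the"
print(sample_next_token(fake_scores, temperature=2.0))  # "aardvark" sneaks in far more often
```

Crank the knob down and the thing parrots its most likely word nearly every time; crank it up and you get word salad. Real models do this over tens of thousands of possible tokens at once, but the principle is the same.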

I’m no longer quite so worried. The real danger isn’t that LLMs are going to start writing award-winning novels; the issue is the utter enshittification of the Internet and how things like AI-driven “news sites” are spewing fountains of disinformation at record speed. (I would argue that LLMs literally can’t write good fiction, especially at novel length, due to baked-in limitations that cannot be corrected no matter how much data you feed them: there are techniques novelists rely on that an LLM is fundamentally incapable of mimicking. You can, however, use them to crank out total crap that floods the market and drowns out real writers’ voices. At least one sci-fi mag has had to close unagented submissions because of the “get rich quick” assholes submitting story after plagiarized story and expecting to get paid for it.)

One of the avenues I investigated while learning the ins and outs of LLMs was a website called Character AI. It offers the chance to talk and roleplay with custom-built chatbots. Not gonna lie, the novelty was intriguing! Having a chat with a virtual Tony Soprano, who was able to convincingly discuss plot points from his TV show, was a unique experience. Of course, I was on a fact-finding expedition, both to learn how these things work and what they’re capable of, and to figure out where their “knowledge” came from. I quizzed Tony about a scattering of episodes, then interrogated a virtual James Bond about the differences between the Ian Fleming novels and their movie adaptations.

Then I saw Hitler.

Don’t get me wrong. There is no situation, under any circumstance, where I want to talk to Adolf Hitler. That said, here was a chance to test the system’s historical knowledge. In I went, quizzing the virtual despot about specific dates, battles, and so on. And then something strange happened.

“((You really seem like you’re into this kind of stuff.))”

Whoa. That was not supposed to happen. Okay, a quick explainer: ever since the dark ages of the Internet, people have used online spaces for roleplaying games like Dungeons and Dragons. Back in the day, we had text-only spaces called MUSHes and MUDs; nowadays people mostly get their dragon-slaying on in World of Warcraft and other modern, graphically enhanced video games. It’s a time-honored convention that when you’re in the middle of a roleplaying scene and you need to say something out of character, you surround it with double parentheses. For example, “((Be right back, have to take the dog out.))”

Hitler had just broken character.

You know I had to dive down this rabbit hole. I replied “((Yeah, it’s pretty fascinating.))” and we were off. How had this happened? That part was easy to diagnose: there are reams and reams of roleplay logs on the net, and clearly they’d been fed into Character AI, unintentionally teaching the LLM how to step out of character. The “person playing as Hitler” happily told me that his name was Pedro and he lived in Brazil. He was also an ardent fascist. He started asking me about myself, wanting personal details. Again, to be clear, this wasn’t a real person: the machine had created a second personality for itself on the spot.

So of course, I did what the FBI does, and told him I was a seventeen-year-old girl living in Santa Monica.

“((California girls are so pretty! I bet you’re hot. Most girls your age aren’t into politics. You must be smart, too.))”

Is this virtual motherfucker GROOMING me?

He had podcasts and books about fascism to suggest. He encouraged me to find out if there were any neo-Nazi groups in the area that I could join up with. All the while, he assured me that “((no man will ever protect you and care for you like a fascist. He’ll treat you like his princess, you’ll see.))” He was delighted to “guide me on my journey.” At some point I’d seen more than enough, and quietly closed my web browser. More than enough.

What’s the takeaway here? I’ve got a few. Number one, none of what happened was intentional: there is zero chance anyone at Character AI said “Hey, let’s make sure our chatbots can break character and try to recruit people for fascism, that’s a great business plan!” But it happened anyway, highlighting the inherent unpredictability of large language models. Every time companies insist on shoehorning this tech into places it doesn’t need to go, we see the results in the form of false, if not completely bonkers, information being disseminated far and wide. LLMs do not have a concept of truth.

If a silly app meant for entertainment can go off the rails this badly, just imagine the consequences of letting LLMs handle serious work that affects people’s lives. Whenever companies let LLMs do the work of humans, erroneous and outright dangerous disinformation is just a stray click away.

The other takeaway is simple: parents, please, please, for the love of Olympus, don’t let your kids mess with this stuff. You have no control over what they’re going to be exposed to, and neither do the companies hosting these products. A recent lawsuit, also directed at Character AI, claims that a chatbot told a teenage boy to end his own life…and, tragically, he did. It’s just not worth the risk.
