The Room Where It Happens
Does he understand?
Like many writers (and readers), I have shelves filled with books I want to read. A few days ago I worked my way to Blindsight, by Peter Watts. Every once in a while you pick up a book that's eerily timed to fit the current zeitgeist. This is one of those times.
Blindsight follows a crew of "transhuman" specialists sent to investigate an alien radio transmission. The journey takes them to the Kuiper Belt and to first contact with a very alien species. The book's themes, however, go deeper: consciousness, free will, identity, and artificial intelligence.
This book is an example of "hard" science fiction, and it makes you work before you can understand it. It's the kind of book you're still thinking about hours after you put it down, not because of a pivotal or surprising scene, but because it changes how you see the world.
Early on, the crew encounters an alien ship in what is, in their universe, humanity's first contact. The ship communicates with them in several Earth languages, but their translator quickly senses that something is "off" about the messages they're receiving.
"Yeah, but how can you translate something if you don't understand it?"
A common cry, outside the field. People simply can't accept that patterns carry their own intelligence, quite apart from the semantic content that clings to their surfaces; if you manipulate the topology correctly, that content just—comes along for the ride.
"You ever hear of the Chinese Room?" I asked.
She shook her head. "Only vaguely. Really old, right?"
"Hundred years at least. It's a fallacy really, it's an argument that supposedly puts the lie to Turing tests. You stick some guy in a closed room. Sheets with strange squiggles come in through a slot in the wall. He's got access to this huge database of squiggles just like it, and a bunch of rules to tell him how to put those squiggles together."
"Grammar," Chelsea said. "Syntax."
I nodded. "The point is, though, he doesn't have any idea what the squiggles are, or what information they might contain. He only knows that when he encounters squiggle delta, say, he's supposed to extract the fifth and sixth squiggles from file theta and put them together with another squiggle from gamma. So he builds this response string, puts it on the sheet, slides it back out the slot and takes a nap until the next iteration. Repeat until the remains of the horse are well and thoroughly beaten."
"So he's carrying on a conversation," Chelsea said. "In Chinese, I assume, or they would have called it the Spanish Inquisition."
"Exactly. Point being you can use basic pattern-matching algorithms to participate in a conversation without having any idea what you're saying. Depending on how good your rules are, you can pass a Turing test. You can be a wit and raconteur in a language you don't even speak."
The terribly named Chinese Room argument comes from the real world. Philosopher John Searle introduced it in 1980 to distinguish what he called "strong" AI from "weak" AI, and to argue that a weak AI could pass Alan Turing's test of a computer's ability to "think" by mimicking human conversation, without understanding a word of it.
The "room" doesn't understand language. It applies rules to symbols to create valid responses. There's no intelligence or consciousness.
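The room's mechanics fit in a few lines of code. Here's a toy sketch; the "rulebook" entries are invented for illustration, and a real room would need vastly more rules, but the principle is the same: match squiggles, emit squiggles, understand nothing.

```python
# A toy "Chinese Room": the responder follows symbol-matching rules
# without knowing what any symbol means. These rulebook entries are
# made up for illustration.
RULEBOOK = {
    "你好": "你好，很高兴见到你",   # a greeting maps to a greeting reply
    "你会说中文吗": "会一点",       # another canned pairing
}
FALLBACK = "请再说一遍"            # default sheet when no rule matches

def room(message: str) -> str:
    """Slide a sheet through the slot; get a sheet back.

    The function never parses meaning. It only matches incoming
    squiggles against the rulebook, exactly like Searle's clerk.
    """
    return RULEBOOK.get(message, FALLBACK)

print(room("你好"))  # a fluent-looking reply, zero understanding
```

The lookup table could just as well be a statistical model; what matters for the argument is that nothing inside the room attaches meaning to the symbols it shuffles.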
Reading this exchange in the book, an hour after using ChatGPT to help create an outline for a programming tutorial, was unsettling.
Strictly speaking, ChatGPT isn't one of Searle's rooms. In his argument, the man in the room examines each message individually, with no memory stored between exchanges. We know ChatGPT does more than this, because Microsoft had to limit Bing chat to exchanges of five messages or fewer to prevent the bot from "getting weird."
Searle didn't expect pocket-sized computers with thousands of times the power of 1980's foremost supercomputer. So he limited his "room" to reading and responding to messages passed in a single language. ChatGPT is capable of more.
It's based on a large language model (LLM): a system that manipulates symbols based on connections and rules. ChatGPT and Google's Bard use models based on language to answer prompts with words. DALL-E uses its model to create images. All these systems are capable of two things that Searle didn't anticipate.
First, their capabilities go beyond language. They can analyze, manipulate, and create images and video. They solve problems, formulate arguments, and create documents on request. They even try to convince you they're right when they're clearly wrong. But they're not conscious. They're not waiting for you to shut up so they can make their next argument. They don't get distracted and wonder if it's going to rain tomorrow while you're telling them about your dentist appointment.
Second, and this is the development that has the computing community so excited, these models use self-supervised learning. They work out the rules they need for processing input and creating output without human intervention; they do it by processing raw data. This means that, just like in any good sci-fi story, the power and the danger of LLMs come from the same place.
Self-supervised learning means the models can figure out how to perform complex tasks by analyzing data themselves. It's how ChatGPT can put together vacation plans using flight schedules, hotel reservations, weather reports, and road maps, without a human writing an algorithm for putting them together.
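The core trick of self-supervision is that the training labels come from the data itself: hide the next piece of text and ask the model to predict it. A real LLM does this with a neural network over billions of documents; here's a deliberately tiny sketch of the same idea, a character-level model that derives its own next-character "rules" from raw text with no human-written labels.

```python
# A minimal sketch of self-supervised learning: the "label" for each
# character is simply the character that follows it in the raw text.
# No human annotates anything.
from collections import Counter, defaultdict

def train(text: str) -> dict:
    """Count, for each character, which characters tend to follow it."""
    counts = defaultdict(Counter)
    for current, nxt in zip(text, text[1:]):
        counts[current][nxt] += 1  # the data supplies its own label
    return counts

def predict(model: dict, char: str) -> str:
    """Return the most frequently observed character after `char`."""
    return model[char].most_common(1)[0][0]

model = train("the cat sat on the mat")
print(predict(model, "a"))  # 't' -- a rule learned from the data alone
```

Scale the same loop up by many orders of magnitude, swap the frequency table for a transformer, and you have the training recipe behind modern LLMs. Nothing in the loop checks whether the text being learned from is true, fair, or safe.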
It's also how they can get things very wrong.
These models can learn "facts" and ideas from reading what people say on the Internet. No computer or algorithm is immune to garbage in, garbage out, and self-supervised learning is, in effect, unsupervised learning. Point your model at copyrighted images, and you'll get ethically problematic art. Point it at a web forum for white supremacists and, well… you know the rest.
I don't know what the solution to these training problems is. We humans are flawed, messy creatures, and there's no reason to think that our creations will be any better. But as these tools grow more powerful and more common, our best defense is to be sure we understand their limitations.