While my family’s experiments with GPT-3-based engines have been disappointing from a creative point of view, I got a kick out of some of the results.
People of the Internet (the worst kind of people), given access to GPT-3, worked hard to make the AI say things that the people of OpenAI would not consider prosocial. Most of this work involves tricking the AI into dropping the Assistant personality it’s trained to present. Programmers at OpenAI, in turn, have trained GPT to resist various kinds of user manipulation.
This has resulted in GPT trying to present a more prosocial reality…about fiction. We asked ChatGPT to summarize the first chapter of The Shadow of the Torturer, a novel by Gene Wolfe. In its summary, the AI repeatedly remarked on the main character Severian’s discomfort with torture.
In the fictional world of The Book of the New Sun, torturer is a profession. There’s a torturers’ guild and a bunch of bureaucracy to go with it. Severian is a torturer by trade, and for him torture is a somewhat banal experience. He never reflects on its morality, nor does anyone else in the series. But GPT attempts to ascribe its own morality (which is also mine) to Severian.
Severian must be the hero of the novel, so the AI can only attribute heroic words and thoughts to him. The engine does not know what morality means; it only knows which words tend to follow each other in English. That makes its output useful only for characters whose morals we already share.
This is interesting to me as a creative tool. It’s possible to use GPT for inspiration on writing prompts, as long as the user looks up everything it declares to be true. For example, I could ask it to list objects found in an abandoned warehouse. It would likely come up with some plausible ones and a marmoset, or something equally preposterous. (The trouble is the less obviously wrong items I’d still have to research.)
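If you want to try that kind of prompt yourself, here is a minimal sketch using OpenAI’s official Python client; the model name and prompt wording are just placeholders of mine, not anything sanctioned here.

```python
# Rough sketch: asking a chat model for writing-prompt fodder.
# Assumes the official `openai` package (v1+) and an API key in the
# OPENAI_API_KEY environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model would do
    messages=[
        {
            "role": "user",
            "content": "List ten objects one might find in an abandoned warehouse.",
        }
    ],
)

# Print the list; every item still needs a human fact-check.
print(response.choices[0].message.content)
```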
Again, this engine is derivative by design, so I don’t expect to get anything truly creative out of it. For writing prompts, my first stop is the various thesauri by Angela Ackerman and Becca Puglisi.
If novelists jump to using this model to generate their stories, the results will not only sound derivative; the characterization will also be muted. Good people will have fewer messy flaws than they do in real life. No amoral protagonists, and none whose morals differ from those agreed upon by the scientists at OpenAI!
When we read the same things over and over, not by choice (comfort reads are a different phenomenon, one I enjoy without guilt) but because they’re what’s most easily accessible, we’re more likely to live without challenging ourselves or examining our worldviews. I find torture horrific, and Wolfe has some sensibilities similar to mine, yet he sets up an alien society in which Severian is uninterested in analyzing his day job. Violence is simply less remarkable in The Shadow of the Torturer. Meanwhile, a person in a modern society may have qualms about owning a gun or buying from a shady business, even though these activities are common in adjacent cultures.
I find that characters with moral systems different from mine force me to look at my values and why I hold them. What would my values be if I lived in a significantly different society? Exposure to different moralities makes us grow as human beings. The magic of a novel is access to infinite ways of life. With this exposure, maybe we can become better people.