A Writer’s Nightmare
Suddenly, it’s upon us.
We stand unprepared, all thumbs and fumbles, wondering what to do. Like always.
It’s AI, artificial intelligence, and it’s coming to get you. Elon Musk said recently that artificial intelligence “has the potential of civilizational destruction.” Never mind that he might have had selfish reasons for saying that. You’ve got to consider the possibility that he told the truth.
We’ve known for decades that someday machines would learn to think. I once read a futuristic short story in Playboy magazine, back in the days when we all read Playboy for the articles (nod, nod, wink, wink). In this story, a worldwide group of computers had been connected with one another. Their pooled intellect gave birth to an über entity which took over the world, cutting biological humans out of the picture.
I don’t remember the story’s title, or who wrote it, but it appeared in the 1960s.
So we can’t say we didn’t see this coming.
But now, it’s here, arriving on our doorstep last week, in a COVID-style sneak attack. Remember March 12, 2020, when they canceled March Madness?
It now seems the second full week of April 2023 will be remembered as When AI Became a Serious Matter.
People—not scientists necessarily, but the kind of dreamy folks I hang out with, i.e., writers and philosophers—now report playing around with something called ChatGPT. In their playing around, they have discovered that ChatGPT can write prose that seems remotely like something a human being might have written.
Let me assure you, Dear Reader, I am not among those who have played with ChatGPT. I would not have been among those playing with nuclear fission in 1942, either.
But I gather, from what those others are saying, that the problem is not that the output is undetectable; one can still tell it is computer-generated. Rather, it is that for the first time one can imagine that in the future—for example, later this afternoon—ChatGPT or a similar program will get good enough that one can no longer tell the difference.
Never mind the potential end of world civilization; AI could put writers out of work.
This is serious.
I took part in a recent colloquium of concerned authors. Scores, maybe hundreds, of authors attended. Many of these authors, unlike Your New Favorite Writer, earn all or part of their income by writing.
I started out puzzled but eventually caught the drift. Or, rather, drifts.
1. AI programs will soon start writing better than we can . . . or almost as well as we can . . . and we’ll all go broke. Publishers will no longer need human authors. They’ll just push a button on a machine and get a book, ready for release. No royalties will be owed to any humans. The AI machines, we assume, will accept their wages in electricity and silicon.
2. Since AI is already quite useful at discharging the kinds of tediosities with which we writers are burdened, some of us hunger to use AI programs or bots—or whatever they are—in our own writing practice. Not for writing, you understand. Oh no, never!! Rather, we would use them as slave labor for menial tasks—up to and perhaps including the elaboration of trial texts which the writer may then modify. Thus we would reserve the higher functions of authorship for us who so richly deserve royalty checks. This would be, I suppose, something like the way research professors use graduate students. In this sense, AI is merely a tool, like a dictionary or thesaurus, clearly beneath the mystical heights of creative writing.
But you’ve already seen, have you not?—It’s so hard to get ahead of you, Astute Reader—how in some vague way this second concern is already at odds with the first?
3. So, to reconcile the first and second concerns, some participants declared we authors ought to embrace AI to whatever extent is warranted, so long as the publishers are required to attribute each published work to a specific human author—one who gets paid.
It would take at least an act of Congress to make this happen. But a squadron of intellectual property lawyers is already cranking up its engines to bombard publishing contracts and book copyright pages with model provisions—“guardrails,” they call them—protecting authors’ rights to control the use of their product, and importantly, to get paid.
This drafting of guardrail language, all by itself, strikes me as a Herculean task. But it may be necessary work, because as you know, what people can do, they will do.
There is no holding back this tide of AI. It’s a giant wave, and we shall either find a way to shoot the curl or be crushed in the collapsing pipeline.
4. Besides the three areas of concern already mentioned, there is another. It seems existing AI platforms already incorporate the output of human authors as a training aid. The chatbots are learning to write better because their creators feed them sample text from published books that are the copyrighted property of authors—without acknowledgment or compensation, as far as I can tell.
It may be possible, through legal action by authors’ groups, to prohibit the use of authors’ works as training materials for AI bots, or even to claw back some form of payment for the unauthorized uses that have already occurred.
It’s yet another messy area for the lawyers to sort out. But I take it as a mere corollary of two more basic questions:
1. Who’s going to do the writing, people or machines? And,
2. Who’s going to reap the benefits, authors or publishers?
Pardon my chutzpah, Gracious Reader, but I suspect we’re still missing the Big Picture.
It’s all about the power of narrative. I once heard a lecturer say that when it’s time for bed, kids ask for stories. They don’t say, “Mommy, please read me the telephone book” or, “Daddy, I want to hear a grocery list.” They say, “Read me a story” or even, “Tell me a story.”
Authors are storytellers. Even technical writers owe their jobs to the particular skill of stating scientific or technical facts in a way that allows readers to understand those facts as a sequence of events including a chain of causation. In other words, they tell stories, no less than novelists or playwrights do. What’s true for technical writers is even more obviously true for freelance journalists and for the authors of narrative nonfiction books.
So, if we storytellers are afraid that publishers will gain access to computer programs that allow them to cut us out of the profits, ought not the publishers worry that they, likewise, will be cut out of the profits?
If AI can write as well as human writers, will there not come a day (perhaps next Tuesday) when you can pull a cell phone from your pocket and command, “Tell me a story”? And the phone will make up a story on the spot and either speak it or type it to you. Maybe it will even assemble a complex dramatic video for your entertainment.
And—get this, Dear Reader—the product will be first-rate. It won’t be merely grammatical. It will be tense and compelling. Maybe not hilarious—humor is notoriously difficult, and it may exceed what machines can learn—but they’ll be able to assemble great suspense and action films. They’ll do it all by themselves. You’ll be able to order up an original story at the touch of a thumb.
Oscar Wilde is supposed to have said, upon viewing a flawless forgery of an art masterpiece, “It has all the virtues of the original, except originality.” The same may be said of these future machine-generated stories. Readers, however, will still gobble them up.
BUT SO WHAT? What difference does all this make?
If we can get stories concocted instantly by our phones, what impact will that have on literature? Surely it will mean authors and publishers as we know them will no longer exist—at least, not in any traditional framework.
Yet story will survive.
If we look to our phones for stories, that’s only because story is a basic requirement of the human race. You might even say the capacity for story is what makes us human.
One can easily imagine machines telling stories.
One cannot imagine machines needing stories told.
Only humans need that. And those who hear stories can also invent other stories. In fact, some of us can’t help ourselves.
So story will survive. Humanity will go on. There will still be human storytellers.
I’ll still be here, writing the old-fashioned way, regardless of what the machines may be doing. Whether I’ll be paid is another question; but then, I’m not being paid now.
It’s not about the money. It’s about the story.
Keep reading. Keep writing.
Larry F. Sommers
Your New Favorite Writer