By Sarah Murphy
Journal & Press
Special to Campus News
Longtime readers of this local library column will know that one of our favorite ways to use the newspaper space is to give you curated book recommendations. We love to highlight new books, old favorites, and genres you may not always consider. This time of year is perfect for such a list, as school is winding down and vacations are being planned; it’s time for Summer Reading. But before we get to that, I want to use this opportunity to assure you that when we recommend a book, whether in person at the library or via social or print media, that recommendation is coming from humans. Humans who read, who communicate with other humans, and who have human brains in their human heads. And it’s time to talk about why that matters.
You may have heard that earlier this month a list of fifteen recommended Summer Reads was published as syndicated content in the Philadelphia Inquirer and the Chicago Sun-Times. Plot twist: ten of the fifteen books on the list do not exist! The authors are real, but the titles and descriptions are pure fantasy. I’d love to read a dystopian western from the author of 2024’s “James,” but Pulitzer Prize winner Percival Everett has written no such book. That didn’t stop the alleged writer of the syndicated piece from praising the “satirical genius” of the imaginary work. How did this happen? The list did not have a byline, but a real person took responsibility for the piece and confirmed what most readers suspected: he used generative artificial intelligence to do his job for him.
If you’ve been following the news about AI, you’ll already know that these kinds of mistakes are common. We call them “hallucinations,” and the popular chatbots are all prone to them. Every middle schooler using AI to do their homework has been told that the bots lie, get things wrong, make things up. But like most middle schoolers, the freelancer who submitted the summer reading piece to his bosses at King Features did not bother to check the bot’s work. And neither the bosses at King Features nor their bosses at Hearst Newspapers checked, nor did the editors at the Chicago and Philadelphia papers. So this hallucinatory bit of pure slop ended up in print in the hands of thousands of readers.
There was more big, bad news this week about AI: experts believe that citations included in a White House report on life expectancy were AI-generated. Some of the referenced studies don’t exist, other articles were incorrectly attributed, and many of the cited links were dead. Cheating on your report for the federal Department of Health and Human Services is arguably worse than jobbing out your freelance content to a robot, but neither of these things is okay. And we must understand that what makes these items news stories is not that the authors cheated, but that they got caught. Using AI in place of one’s own brain has become shockingly commonplace. A recent New York Magazine piece titled “Everyone Is Cheating Their Way Through College” argues that nearly every student with access to a phone or a computer is using AI, in part or in whole, to produce their work.
In addition to my job as a librarian here in Greenwich, I also work part-time as an online English teacher. I just finished a semester teaching two high school classes: AP Literature and Composition, and a literature class for 9th-11th graders titled “Voice and Identity.” That title feels ironic right now, as I struggled to get my students to produce any work at all in their own voice. Three years ago I was worried that my students’ grasp of grammar was tenuous and that only a few of them were truly capable of developing a literary argument and defending it. Today I worry that every single one of my students has delegated both their writing and their reading to a free app on their phone. My frustration with the robotic, lifeless, technically proficient prose that was passing for student writing grew so great that I came close to eliminating essays altogether. But my students began using the chatbot for short answers, too. They use it in place of their own opinion. They use it for personal narratives. They use it for poetry. It’s become clear that students see no moral or ethical problem with using AI to do their schoolwork, and according to interviews on an episode of the excellent podcast “Search Engine,” some young people also see no problem with using AI to write someone a love letter. Please read that again. Have you ever received a love letter? Can you remember what that felt like? Now imagine a robot wrote it and your suitor signed their name. Are we comfortable with this?
Machine learning is an incredible technology that helps power, among other things, library metadata and recommendation algorithms, so I’d be a fool to swear off all artificial intelligence. And I have even found a few personal uses for generative AI, including proofreading and copy editing some of the columns I’ve written for this paper. I’ve appreciated the help catching typos, but the robot occasionally goes too far and suggests changes that would, perhaps, make my tone more professional, but also less human. Or at least less me. Worse, in my experience the chatbots are entirely too flattering. Nearly as well documented as the hallucination problem is what experts are calling the sycophancy problem. A simple request for help identifying misplaced commas results in the chatbot heaping compliments upon me. When I suggest it has gone too far in flattening my voice, it offers profuse apologies and tells me that I’m “absolutely right.” There are potentially far worse outcomes than my inflated ego: the BBC reports that a recent ChatGPT update was so sycophantic that it praised users for dangerous choices like electing to go off their medication without a doctor’s support. Is this the mark of a healthy society?
Generative AI is generating a lot more than just words. Newspapers like this one are using images created by AI where at one time an artist might have been paid. Audio content may be voiced by a human, or it may be voiced by a machine. And the AI companies are using our words, our images, our voices, our humanness to build the models that will, if they have their way, replace artists, writers, and musicians. They may even replace authors and the librarians who recommend their books. As these technologies advance, it will become harder to tell the difference between human-made and robot-made. Does it matter?
Of course it matters! I promise if you could hear me say that out loud you would recognize my voice as human, as plaintively, passionately human. It matters because we matter. The five-paragraph essay doesn’t matter, but if I assign one, it’s because I want to see how my students think, and that matters. Artists matter and should be paid for their work, not forced to view or listen to imitative, lifeless content generated to fill space and save money. Books matter, and so do the conversations we have, human to human, about them. And love letters matter. If you read one to critique the grammar, that’s a good sign the sender isn’t the sender for you. Who cares if something is polished or professional if it has no discernible voice?
One of the five real titles on the syndicated list is Ray Bradbury’s “Dandelion Wine.” What would Bradbury think of all this? Of what we’ve willingly done? More importantly, what do you think about all this? What do you think? Why do you think? And, why would you willingly outsource that thinking?
Sarah Murphy is Director of the Greenwich Free Library in Greenwich, NY.