The AI Writing "Boom" and the Myths That Surround It

LLMs Are Not AI, and They Certainly Aren't Toothpaste

About twenty years ago, I took a college class on Victorian Literature with Jerome Bump at the University of Texas at Austin, where I was an English major. It was one of my favorite classes, where most days found us hiking down to look at the dinosaur fossils in Shoal Creek or sketching Littlefield House, a little-known Victorian house on campus. For our final project, we had a choice of writing a paper or a more “experiential” assignment: I chose to create a Victorian Literature Bot.

I can’t recall for the life of me what technology we used, but essentially, I fed the bot excerpts from Victorian-era poems and the history of that time period, and if you asked the bot questions, it would answer them using my inputs. Chatbots back then were simple human-machine hybrids: they relied on humans to input their answers. Mostly, I found them funny. I snuck little jokes into my chatbot and spent hours more than I should have on the project. What would have been a dry research paper full of quotes became a bot with a snarky Holly personality. At the time, I think I had a fairly optimistic outlook on technology—most of which my parents didn’t understand. Cyberpunk ruled in my world—so much so that I called my first blog “Sylphack”—a combo of the Sylph myth and “hack” for hackers.

We’ve come a long way from my little chatbot, who mostly quoted Gerard Manley Hopkins, to today’s so-called AI.

The current existential threat to creatives is Artificial Intelligence, also called Generative AI, or more accurately, Large Language Models. Some have argued these tools are beneficial to writers and artists because they remove access issues, while the majority of the writing and art community see them as being similar to NFTs: Deeply problematic and likely to fail. Reactions range from cringeworthy milquetoast to “delete all Google products, burn my computer to the ground, and go live in the woods.” (Honestly, I can’t blame those who want to do the last one; I think about the nuclear option daily.)

This post is going to be an attempt to synthesize my thoughts and feelings on the topic as they stand right now, knowing full well that the landscape may change drastically and rapidly. There are a couple of myths and misunderstandings about AI circulating in the creative community right now that I want to discuss. The reality is AI is a gray area, and how we treat it in the years to come will have massive impacts on the livelihood of creative people.

LLMs Are Not True Artificial Intelligence

What changed between twenty years ago and now? Why were chatbots acceptable to use when LLMs are not? Well, the main difference between the chatbots of yesteryear and today is data. ChatGPT and other LLMs work by using a neural network to process data in a way loosely modeled on how the human brain works. Also called “deep learning,” this technology is responsible for some of the most fascinating breakthroughs of the last decade. Neural networks allow computers to learn through data.

This technology got its start in visual learning and image recognition. For example, if you feed a computer a ton of pictures of cats and label them as such, in theory, the network would soon be able to identify the cats. It does this through “neurons” or nodes that process parts of the larger question. For example, each node might identify different parts of the cat like a tail, whiskers, paws, etc. Using mathematical equations and different weighted numbers, the nodes connect in a network. (The best simple video I’ve found on this is here on YouTube).
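To make the node idea concrete, here is a toy sketch (my own illustration, not code from any real framework) of a single node: it multiplies each invented feature by an invented weight, sums the evidence, and squashes the total into a 0-to-1 score.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum: each feature's evidence, scaled by how much it matters
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    # Sigmoid squashes the sum into a 0-1 "confidence" score
    return 1 / (1 + math.exp(-total))

# Toy features: [has_whiskers, has_tail, has_paws], each 0 or 1
weights, bias = [0.8, 0.5, 0.4], -1.0
catlike = neuron([1, 1, 1], weights, bias)      # all cat features present
not_catlike = neuron([0, 0, 1], weights, bias)  # paws alone
print(catlike > not_catlike)  # the catlike input scores higher
```

Training is just the process of nudging those weights until the scores match the labels; stack thousands of such nodes into layers and you have a neural network.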

In this way, today’s LLMs are language processors. They have been fed a ton of publicly available data, and they use that data to mimic language, predicting sentence structures. When you ask an AI to “write a short story”, it is using all the data available to put sentences together in a way that mimics a short story. Microsoft Data Scientist Andreas Stöffelbauer explains that AIs are trained to generate “‘human-like’ text, not true text. Nothing indicates truthfulness to LLMs” (Data Science at Microsoft). The LLM recreates the patterns of what it knows to be a short story, and then humans training the LLM reinforce that learning. It’s a far cry from what we think of as Artificial Intelligence: A machine that can learn and think independently.
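A crude way to see “prediction without truth” in action: the bigram toy below (my example, nothing like a production LLM) counts which word follows which in its training text, then generates by always picking the most common follower. It reproduces the patterns of its inputs with no concept of whether the output is true.

```python
from collections import Counter, defaultdict

def train(text):
    # For each word, count which words follow it in the training data
    model = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def generate(model, start, length=5):
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        # Pick the most common follower: pure pattern-matching, no truth
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

model = train("the cat sat on the mat the cat ate the fish")
print(generate(model, "the"))  # a fluent-looking remix of training patterns
```

Real LLMs do this at vastly larger scale, with neural networks over “tokens” rather than raw word counts, and they sample probabilistically, but the underlying objective—predict the next token—is the same.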

One argument I’ve seen made in the science fiction community is that AIs are science fiction, so we should embrace them. But I think this ignores the larger issue: One, you can’t define the AIs that are actually LLMs as such today, and two, science fiction, for the most part, has set out to warn us about AI, not to embrace it. From I, Robot to Ex Machina and beyond, most SF uses the slippery slope argument to explore AI to its extreme possible outcome: And it’s not often good because humans aren’t often good. Humans are often greedy and make mistakes. Science Fiction saw AI as just that: An artificial intelligence that can eventually gain sentience when taken to its logical outcome.

So the argument that because AI is a new technology, it is somehow good is fairly ungrounded in the history of Science Fiction. This is the downfall of most technology humans create: We often don’t think through its eventual ramifications, and science fiction exists to make those connections.

Another myth to bust is the question of whether LLMs are similar to other technology-aided tools like Grammarly or spell check. Early Microsoft Word spell-check essentially used an internal dictionary to check every word against the “correct” spelling. For this reason, it was often inaccurate, although it did have a function to allow users to update the dictionary as needed.
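That dictionary approach fits in a few lines. This sketch (my illustration, not Word’s actual code) flags any word missing from a known-word set, which is exactly why correct-but-unlisted words like names or jargon got flagged, and why “add to dictionary” fixed it:

```python
def spell_check(text, dictionary):
    # Flag every word not found in the known-word set
    return [w for w in text.lower().split() if w not in dictionary]

dictionary = {"the", "cat", "sat", "on", "mat"}
print(spell_check("The cat sat on teh mat", dictionary))  # ['teh']

# "Add to dictionary" is just growing the set (users can whitelist typos, too)
dictionary.add("teh")
print(spell_check("The cat sat on teh mat", dictionary))  # []
```

No learning, no data pooling: the checker only knows what its word list contains, which is the key difference from the feedback-driven tools that came later.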

Grammarly includes an AI function now, but that function was only added around 2017. Grammarly was first created in 2009 with the goal of helping students avoid plagiarizing (it started as a tool to detect plagiarized work). What seems to be the key to the success of Grammarly’s spell-check function is that it capitalized on its large pool of users, asking them to contribute to whether a spell-check suggestion was correct or not. So, even from its beginning, Grammarly was using user data to make a better product.

It’s impossible to tell how Grammarly has used data since then (was it just feedback, or was it also the text users put into the program?) because, like most tech companies, they aren’t forthcoming with details. And while they may couch their use of that data as “user feedback,” we can likely assume, like most tech companies, that they have been using that data from the beginning. However, Grammarly separates its AI content from its spell checker, so this may give some the idea that they are different.

Even the so-called “safe” spell checkers are built on users’ data and copyrighted material.

Data Used in LLMs Infringes on Copyright

I’ve written before about how art that challenges the status quo is often seen as “not art.” Modern art, from Andy Warhol to Roy Lichtenstein to Mark Rothko, challenged what we know art to be. Artists who remixed or collaged or somehow worked with found objects or in reference to other artists were often seen as thieves, and this idea continues in today’s conversation surrounding found poetry and erasure poetry.

So, how is AI any different? The LLMs and generative AI of today are trained on data input. Data is given to the machine, and it learns based on that data. But unlike my Victorian literature chatbot, the data used by big tech companies to train generative AI is not created by one human, nor does it have a bibliography of where the data came from.

You’ve probably noticed that in an internet-based world, every website you view has cookies that “track” your data. Every piece of software you sign up for has a very long user agreement that generally includes guidelines for data use. But that wasn’t always the case. For a long time, people accepted their data would probably be stolen. But the fear of having, say, your social security number stolen was generally nebulous because there was often no way for hackers to actually use or mine that data. It was just too big. And the majority of data hacks in the early 2000s didn’t result in direct impacts for most people, so they were dismissed.

In 2016, when OpenAI first came on the scene by publishing research about its generative models, it was using image databases like ImageNet—a list of web images first compiled by undergraduate students for $10/hr, and then later by Amazon Mechanical Turk, a crowdsourcing labor pool of people working for pennies to label images à la those “Are you a human?” tests you get when you first log in to a website.

However, it is very difficult to find any information historically on WHERE ChatGPT has gotten most of its images and text. Like most scientific endeavors, AI researchers didn’t stop to think if they should use publicly available data—only what it might make possible. From the perspective of AI companies today, revealing those data sources puts them at risk of having those sources taken by a competitor.

Herein lies the main ethical problem of today’s AI. It’s built off the back of underpaid or, more often, unpaid labor, and I don’t mean just college students. The reality is that most of the data used by AI companies is copyrighted “data” (artwork or writing, for example), which is then being used to create new artworks that remix the original with no attribution or credit to the creator—which is then being sold for corporate (and individual) gain.

AI companies have asserted that the creation of AI would be impossible without the use of copyright material. They argue that AI falls under the fair use doctrine of copyright, which considers a few factors:

  • Purpose and character of use: Is the use commercial or non-commercial, and is it “transformative”?

  • The amount and substantiality of the portion used: How much of the work is used, and how important is that portion in context?

  • Effects on the market: Would the use harm the future market for the original work?

With most collage art, found art, found-text creative writing, and erasure poetry, most of these factors are met. That kind of art is unlikely to be confused with the original; it is often “transformative,” i.e., it creates a whole new piece of art; usually the original work is excerpted in small amounts; and there is little argument that those types of art harm the market for the original work.

In the case of AI, opponents can argue that:

  • The use is commercial: Most public AI began as a free experiment but has quickly devolved into a pay-to-play format. In addition, AI companies have begun paying for data as a way of avoiding lawsuits.

  • The use is in entirety: Most AI companies are using the full text of work/data, not just excerpts.

  • The use is harmful: AI is being used for quick commercial gain and harms artists’ abilities to find work.

The only potential difference here is that, from my view, AI is transformative: You can’t often recognize the original referenced work an AI is using because it’s a conglomeration of datasets.

Let’s take a step back and imagine a Utopian landscape. What if AI companies HAD asked creatives for permission to use their work?

Let’s say a set of writers wanted to pool their work into an experiment. They give their books to a data company and allow them to create AI from them. In turn, the writers are paid a portion of the proceeds from the project. That is an experiment I would be interested in seeing. What would it look like if AI was used responsibly and with permission?

Well, I personally feel it would look a lot like today’s music streaming model. Most music artists don’t get much money from streaming. It has shifted musicians’ focus to touring and merch, making the actual creative object, the downloadable song, no longer the main revenue stream.

Of course, this would probably never happen because the AI industry simply does not see the data it is using as a creative endeavor with value. The reality is that our society has already very strongly devalued creative work, making it impossible for most creatives to make a living. STEM is king, baby.

They will never put a poet on the moon.

AI Is Accessible . . . Or Is It?

One of the breakthroughs in AI that I’m actually fascinated by is its use in disability and access. AI has been used to create more responsive prosthetics, improve reading skills and text-to-speech or translations, map wheelchair-accessible spaces, and so on.

On the other hand, AI is not generally designed with accessibility in mind. A research study by Maitreya Shah at the Berkman Klein Center for Internet and Society recently found that most AI developers view data related to disabilities as “outlier data” that is often excluded. Shah also found that most AI was not created with disabled people involved in high-level decision-making. (Harvard Gazette). In my research, I found very little AI actually being used in real-life accessibility situations that didn’t also have some limitation or failure, which is frustrating given the potential.

I think that when asked if their work could be used in AI for the purposes of disability and accessibility, many creatives would agree to that use. It’s not that we don’t want technology to help people—it’s that often, the people creating the technology don’t have that goal in mind.

Most of the tools AI purports to have invented—like text-to-speech—were tools already being used before AI as it exists today. So, while it may seem easy to say: “AI is good for accessibility, so we should use it,” the truth is way more complex.

We have to be able to hold complexity about these topics. Could we create accessibility AND credit the artists behind the data? Is there a future where technology puts those most vulnerable first, ahead of corporate profits?

The frustrating part of the accessibility conversation is that I don’t think most people with disabilities want new technology at the risk of damaging creative people’s livelihoods. It frames the discussion in an “all or nothing” way, which is really harmful on both sides. Of course most people want people with disabilities to have more access. Of course people with disabilities aren’t interested in ethically bankrupt access tools.

I Hate to Break it to You, but AI Is Bad at Art

LLMs and generative AI are merely mimickers of language or visual imagery. Because of this, they are generally bad at creating writing or art that is “good”—a subjective value based on many factors.

The easiest example is AI’s propensity, when generating art, to add extra digits to hands and make visual leaps in logic that result in uncanny portraiture.

In writing, AI tends to write in a very formal tone, generating nondescript, boring text. For example, if you ask ChatGPT to write a poem, it will generally create a rhyming poem like a sonnet. Why? Well, my theory is that most poetry publicly available online is older poetry or public-domain poetry. Contemporary poetry—which is what most poets who want to get published should be aiming to write—is rarely online. It defies eBook formatting. Most contemporary poetry got its start in print books or chapbooks. So, the history of how we got to a free-verse poetry world is lost on ChatGPT.

Creativity, like writing and art, requires years of practice to get to where the output is “good”—or, more accurately, to where that artwork speaks to its audience. I’ve seen a lot of discussion online that AI “removes barriers” because it allows people to create art without needing to attend art school or have a formal art education. What this does not acknowledge is that you can still become a successful artist without a fancy education. There are many creatives who have made a career without access to those institutions.

Interestingly, I’ve also seen this compared to self-publishing. Print on Demand publishing has radically changed the landscape of book publishing, for good and for bad, depending on who you ask. On the one hand, the ability for writers to get their work into the world has increased massively. This means that underrepresented voices and previously taboo works like erotica now have the following they deserve. The big book publishers are often predatory, quickly becoming a monopoly that privileges the same old voices. On the other hand, Amazon, the main venue for book-buying, is a wasteland of AI and stolen works.

A more convincing argument is that we’re seeing an increasing silo-ification of knowledge. AI could potentially reduce barriers to learning, allowing people to access information at their fingertips.

You know, like the internet. Or libraries.

This argument misses the fact that AI is still pretty bad at its job. With errors so rampant in AI output, we are ignoring that AI, as we know it, wasn’t meant to be an information distributor. It wasn’t meant to create law briefs, and maybe it shouldn’t be used to detect cancer.

Recently, there has been an influx of AI spam submissions to major online SFF magazines like Clarkesworld and Uncanny Magazine. Two things are happening here: First, AI techbros who want to convince writers it’s an easy way to make a quick buck are targeting short story markets via YouTube training videos as a way to boost their subscribers (Earn $10k a month making viral short stories with AI!—as if it were that easy). Secondly, AI generally has a limit on how much text it can generate at once. So, publishers who are accepting book submissions are less likely to get those AI-generated works, while short fiction publishers who are open to public submissions often get spammed with thousands of works.

Of course, this harms the writers who have put their hard work into writing short fiction because it means markets will have to figure out if a work is AI or not, determine how to weed out the fake stories, and close for submissions in short windows that make it challenging for writers to send in their work.

The gray area here is that SFF is often a whisper network of rules, guidelines, and barriers for new writers. Most people interested in writing short fiction probably haven’t considered the ramifications of using AI. Heck, I think most people don’t understand how AI works. And new writers, often excited about the creative aspect of these tools, are driven toward the Get Rich Quick scammers when the community immediately accuses them of being trash for using AI. This pushes those new writers into the spaces where AI is accepted because, after all, who wants to be told they are wrong? It’s the same story of new writers thinking writing a book is a quick way to make money, not realizing that writing is, in fact, hard.

AI Detectors Are Also Bad, Y’all

Okay, we get that AI is bad. So let’s ban it! 😬 The last myth I want to debunk here is that AI can be reliably detected. With recent backlash in the writing community over AI and its impacts on creatives, I’ve seen a very disturbing trend of people taking a morally superior, high-ground stance against AI and claiming that they are “experts” at detecting AI.

The unfortunate reality is that nobody is an AI-detecting expert. What makes AI so insidious is its ability to be tweaked to where, depending on the user’s skill set, it is virtually impossible to detect.

Most AI-written stories and art are pretty recognizable because, as I’ve said, they are bad. LLMs use similar patterns of language based on the data they have, and those patterns become quickly obvious when you’re reading or looking at art generated by AI. But what about AI used to correct grammar or spelling? AI used to generate a prompt? AI used to create a story that the writer then rewrites or revises? There are endless ways AI can be used and no real good way to detect it.

AI detectors, or websites that claim to be able to detect AI-written or AI-drawn work, have cropped up as a secondary revenue stream for companies looking to capitalize on the trend. And people who want to be anti-AI champions have been, in my opinion, far too quick to use them to try to out AI users.

Recent studies have found that AI detectors don’t work. They are inconsistent and often create false positives, are not accurate or reliable, and, worst of all, are biased against non-native English writers.

However, these aren’t the only problems with AI detectors. Most research has also found that AI detectors, by their nature, rely on AI. It makes sense: in order to find out if something is AI, you have to run it through AI. Furthermore, who is to say that AI detectors are not also collecting data on the text put into their programs, saving that data to use later in AI?

This means that any solution to the AI problem that relies on AI detectors is problematic. By putting someone else’s work into an AI detector, you are exposing their work to the risk of theft and misuse.

Let’s say you decide, like some writing organizations have suggested, to ban AI writing from a major writing award. How do you prove that work was AI? Do you make the writer show receipts in the form of early drafts? Do you force writers to track their writing? What about writers who work by hand in first drafts or writers who don’t have a good tracking system for their files? The rabbit holes this gets into are exhausting. The only reliable way of telling if a piece of writing used AI is by asking the writer. And if someone doesn’t see AI as an ethical issue, they are unlikely to be honest about AI use.

“The Toothpaste Is Out of the Tube”

AI proponents want you to believe you are somehow missing out on something by not using AI. It’s the FOMO of creativity, meant to convince creatives that their work could be better and their processes faster by dangling the ever-elusive carrot of financial success in front of them. The phrase I’m currently exhausted by is, “The toothpaste is already out of the tube with AI, so you might as well accept it.”

The reality is that most creative adventures take time, energy, and, yes, often money to succeed. Most people don’t want to put in the effort. When confronted with the ethical quandary of AI, most people also don’t want to put in the effort to see a future where creatives gain what they actually need: a larger audience and support.

The toothpaste quandary mystifies me. It’s like any other large issue of advocacy. Do we say, “The toothpaste is out of the tube, so we shouldn’t fight for racial equality?” or “The toothpaste is out of the tube, so we shouldn’t fight against climate change?” Hell no. Are we really this nihilistic?

Acknowledging that AI is bad means also making the personal choice not to use it—but it also means being a voice against that technology. Like any issue dealing with corporate greed and the capitalization of unpaid labor, the solution is advocacy. We need more regulation on AI and a government response to AI’s threats to copyright. We need AI companies to be forced to compensate those who have provided the data to make their product saleable.

And we are already seeing just that. The number of lawsuits and government actions against AI increases each month. The New York Times has reported that AI companies are seeing training data sources dry up as more and more sites listen to customers who don’t want their data used to train AI.

Writers who are all too eager to use AI without thinking about the ramifications (after all, it’s already out there, right?) may face copyright lawsuits against their work. This post from law firm BakerHostetler outlines the current legislation regarding AI and copyright.

If we can create AI, we can definitely MacGyver that toothpaste back into the tube. Or at least, I have to believe that advocacy matters and that we can make change in the world.

I’m Back to Hiding in the Woods

This post has gotten way longer than I intended, but I hope it has been helpful in sussing out some of the problems, myths, and issues surrounding AI and creativity. From my perspective, I’m skeptical of AI with the hope that we can find technologies that are more ethically created. But I’d love to know your thoughts. Leave them in the comments below.

Love my newsletter? Upgrade for just $25/year and help me keep this show running:

Browse Upcoming Workshops from Holly Lyn Walrath

Write what you love, love what you write.

Upcoming Workshops from Your Host with the Most Writing Prompts, Holly Lyn Walrath

Confessional Poetry
DATE: 4 Weeks Starting April 7th, 2025
TIME: Asynchronous, Self-Paced via Writing Workshops
Price: $299

Where does the line between poet and poem blur? The poetry of Sylvia Plath, Anne Sexton, Robert Lowell, Randall Jarrell, and Elizabeth Bishop in the 60s, 70s, and 80s became iconic for its controversial use of the “confessional voice.” This genre has arguably shaped contemporary poetry today. In this workshop, we’ll explore what it means to write a confessional poem, but also, how poets can harness personal experience to reach an ideal reader. This workshop juxtaposes classical confessional poetry with contemporary poets who have harnessed the power of trauma to make the private public. Break down barriers, write with authenticity, and embrace the catharsis of confession. 

National Flash Fiction Month: 30 Short Stories in 30 Days 
DATE: 4 Weeks Starting July 1st, 2025
TIME: Asynchronous, Self-Paced via Writing Workshops
Price: $299

This generative workshop is chock-full of 30 writing prompts for short story writers. Whether you write micro fiction, flash fiction, or short stories, these 30 prompts are meant to inspire and support you in this unique writing challenge. You've heard of NaPoWriMo (National Poetry Writing Month), where poets write 30 poems in 30 days, and you've probably heard of NaNoWriMo (National Novel Writing Month), where writers try to write a novel in a month. Now, you can do the same with short stories. Whether you're writing to a specific theme, assembling stories for a collection, or want to try writing a series of connected stories, this workshop will explore new contemporary structures like The Triptych or The Wikipedia Entry. Open to writers of all genres--from realism to memoir to speculative fiction. Please note: This class has sold out every time I have offered it. I suggest you register early!

Self-Editing for Writers
DATE: 4 Weeks Starting March 3rd, 2025
TIME: Asynchronous, Self-Paced via Writing Workshops 
Price: $299

The best editor for a story is the author who wrote it. Every writer is different, and how you approach revising your work can vary based on the project. The key to self-editing is to see the bigger picture. Explore techniques for self-revising with step-by-step guidance from a freelance editor. Learn about the different types of editing, from developmental/content edits to copy/line editing and proofreading. You'll develop a personalized editing checklist that you can take with you from project to project, tweaking as you go.

Writing the Speculative Novel
DATE: 4 Weeks Starting May 5th, 2025
TIME: Asynchronous, Self-Paced via Writing Workshops
Price: $299

Learn how to write (and finish) a speculative novel from outlining to revising to submissions.
Learn tips from a freelance editor who has worked with successful speculative writers to edit their books to perfection. With over ten years of experience in editing both self-published and Big Five writers, I know what works and what doesn’t when it comes to longform writing. In this class, we’ll explore techniques for outlining, critiquing, and revising the speculative novel. Learn how to create your own outline that you can re-use for future projects. Learn how to take on revision from the big picture to nitty-gritty proofreading. Craft your book so that it has the best possible chance to get published!

DATE: 4 Weeks Starting September 9th, 2025
TIME: Asynchronous, Self-Paced via Writing Workshops
Price: $299

Publishing survives on the work of editors. If you’ve ever considered becoming a freelance editor, this workshop will give you the tools needed to get your business started. Learn about the different types of editing, how to structure your editing business, and what resources exist for freelance editors. A nitty-gritty, in-depth guide to becoming a guide for writers.

Self-Paced Workshops (Sign Up Anytime!)

Self-Paced Course: 30 Poems In 30 Days
DATE: Ongoing
TIME: Asynchronous, Self-paced via Poetry Barn
PRICE: $149
This class came out of NaPoWriMo (National Poetry Writing Month), which happens every year in April. Similarly, the goal of this self-paced class is to write 30 poems in 30 days. However, you might write one poem a day, or several poems in a day, and then give yourself a break. It’s totally up to you! Whether you’re writing to a specific theme, assembling a group of poems for a chapbook, or you want to try writing a longer poetic sequence, this workshop is meant to support you with generative prompts and experiences to get you creating plenty of new work.

Self-Paced Course: Journaling for Poets
DATE: Ongoing
TIME: Asynchronous, Self-paced via Poetry Barn
PRICE: $99
Poets are observers. One way to keep track of your observations and ideas is through a writing journal. In this workshop, we'll cover the basics of journaling for poets, not just as a method of processing and keeping track of your thoughts, but as a method of improving your writing life and working towards a career as a writer. You'll learn how to manage large ideas or projects, track submissions, create goals, revise, and more, all while exploring popular methods of journaling to find the one that works for you. If you feel out of sorts or disorganized in your writing life, this workshop is for you!

Self-Paced Course: Queer Poetics
DATE: Ongoing
TIME: Asynchronous, Self-paced via Poetry Barn
PRICE: $99
This workshop is an intersectional primer on LGBTQIA+ writers throughout the history of poetry. We’ll explore poets like Walt Whitman, Adrienne Rich, Allen Ginsberg, and Audre Lorde, but also the contemporary queer poets who have catapulted into the mainstream like Jericho Brown and Danez Smith. We’ll write poems alongside and inspired by the voices of queer poetics. This class is meant both for writers who want to explore their queerness and for writers who want to learn more about the history of queer poetry.

Self-Paced Course: Writing Resistance Through Erasure, Found Text & Visual Poetry
DATE: Ongoing
TIME: Asynchronous, Self-paced via Poetry Barn
PRICE: $99
Hybrid poetry forms can be a powerful form of resistance. From Jerrod Schwarz’s erasure of Trump’s inaugural speech to Niina Pollari’s black outs of the N-400 citizenship form, contemporary poets are engaging with the world through text, creating new and challenging works of art. Heralded by the rise of the “Instapoet,” visual works are a way to take poetry one step further by crafting new forms and structures that often transcend the page.

And now a word from our non-robotic sponsor…

Use a Book to Grow Your Brand and Bank Account

Bring your ideas to life with Lulu.

  • Print high-quality books on demand

  • Sell directly to your audience using ecommerce plugins

  • Retain 100% of your profit and customer data

  • Get paid immediately
