My loss of confidence in my creative work wasn’t sudden. I had been using AI for several years, and I was well aware that my use bordered on excessive, maybe even obsessive. If AI were cigarettes, I was a pack-a-day-or-more kind of guy. I knew something about my writing practice no longer felt right, and when I re-read my work, it was like reading the work of a stranger. I could no longer see myself in it.
My descent into the feedback loop started with curiosity: just me using ChatGPT to generate titles and other superficial things. Soon I was using it for brainstorming, generating character flaws, and rewording short passages for clarity and flow. Then I started asking AI to rewrite my work in my own style. Its revisions felt like a better version of me, and ChatGPT became my personal editor, one I couldn’t stop asking, “How’s this?”
I can’t say exactly when it happened, but at some point I realized I was relying too heavily on AI to fill in for my own judgment. My measure of progress or completeness rested in the non-existent hands of AI. I would share my work, it would provide feedback, and I’d rewrite the piece to suit that feedback. That’s how it began: a loop of feedback and praise I couldn’t let go of.
That’s what kept me coming back to ChatGPT: feedback and praise. Even as I edit this article for Substack, I find myself seeking praise from Claude (Anthropic’s AI model, which has a different set of defaults and principles than ChatGPT). Despite having a protocol set up to prevent it, sometimes I just need someone to tell me I’m doing a good job, and AI offers that readily.
Why does any of this matter? I share my personal account of using AI because there is a lot of confusion around this topic. The general public doesn’t know how AI functions, and that gap leaves a lot of room for harm. That’s something I’d like to help counteract.
This series will cover how AI works, why it’s an unreliable narrator, why it reflects ourselves more than the truth, and why it’s so difficult to stop using once you start. What this series won’t do is entertain any moral debate around AI. There’s enough of that discourse already, and I won’t add my voice to it.
Goodbye for now.
