I Need Your Clothes, Your Boots, Your Training Data
When ChatGPT came out a few years ago, my initial reaction was apprehension. I kept hearing about how "game changing" it would be, which was annoying. But honestly, I think a lot of my hesitation was driven by fear. Countless movies, books, and TV shows have explored the existential danger of AI. The part of my psyche that was raised on '90s action films just wasn't ready to engage with something that, at first glance, seemed to be bringing the world closer to SkyNet.
The first time I used ChatGPT was to fire a lawn mowing service. I was uncomfortable with the interaction, so I offloaded the emotional overhead to AI. It was great. I unlocked a superpower: more emotional avoidance!*
After my inaugural LLM use, I slowly onboarded AI tools into my daily workflow as a software engineer. Most significantly, these tools have changed how I learn. When I use an AI coding tool, I don't just take what it gives me. I ask it why it made a particular choice, then go read the documentation to understand the underlying ideas. It hasn't replaced the learning, but it's given me a faster feedback loop for the journey to understanding. On the other hand, with certain tasks I find myself spending more time wrestling with the LLM's output than it would have taken to just write the thing from scratch.
This kind of hands-on discovery has been more valuable than anything I've read or heard about AI. But I'd be lying if I said it gave me clarity. Using the tools cuts through some of the noise, but it also gives you new reasons to be impressed and new reasons to be suspicious. Sometimes even in the same interaction.
That ambivalence has made me pay closer attention to what I see as a strong narrative divide in the AI space. On one side, a flood of blog posts and LinkedIn thought leadership advertising these tools as transformative. On the other, a deep skepticism fueled by real concerns that are difficult to dismiss. Both sides have substance. But here's what I've noticed: the hype tends to be fueled by successful engagement with these tools, and the skepticism tends to be fueled by failed engagement. One side walks away with inflated expectations, the other with missed ones. This reflects my own experience too: simultaneous over- and underwhelm.
The reason the hype is so effective is that these tools operate in language, and language is how we make sense of the world. No one gets inflated expectations from watching a model do matrix multiplication. They get them from a fluent, confident response that sounds like it came from someone who knows what they're talking about. LLMs exploit that instinct. Verification is extra work, and we don't consistently do it in any medium. And by we, I mean me.
The existential fear is the one the zeitgeist hands us. But the version of AI that's actually reshaping how we work and think doesn't look like the movies. It's here, and becoming increasingly normalized. How we perceive these tools is actively shaping how they get used and deployed. That's worth examining carefully.
I use these tools every day. I find them both incredibly useful and deeply unsettling. This series is my attempt to hold both without collapsing either one.
* It's entirely possible that's been my superpower all along.