AI is already making people less creative
Two recent pieces of research come to the same conclusion, as does common sense
I previously wrote about how AI would split the world of knowledge work (and beyond) into a two-tiered society, best exemplified by H.G. Wells’ The Time Machine, where everyone joins team Eloi or team Morlock. You can read it here if you missed it. This wasn’t a prediction for 100 years from now; I noted it would start happening immediately and create an increasing splintering of the population over time.
To reiterate, this thesis is not luddite at all; I think tech can be genuinely useful when applied to creative work. I make music with software, after all. But there’s no way AI in particular doesn’t split us along these lines, because like many innovations it’s a double-edged sword. Yes, you can use it intelligently. You can also use it mindlessly, or in ways that weaken you, as that story explores. (I also wrote on how AI might make you nihilistic, which is a separate issue, but ultimately I think anyone who becomes nihilistic is metacognitive enough to come around to applying it more thoughtfully, since they crave real meaning.)
Anyway, some recent studies come to the same conclusion. A 206-page research paper from MIT, just published this month, is in full support of this thesis. In one sentence: "LLM users consistently underperformed at neural, linguistic, and behavioral levels." The study found that LLM dependence weakens the writer’s own neural and linguistic fingerprints. Using EEG, text mining, and a cross-over session, the authors show that keeping some AI-free practice time protects memory circuits and encourages richer language even when the tool is later reintroduced.
Separately, Microsoft of all companies also just published research, in collaboration with Carnegie Mellon, warning that workers who rely heavily on generative AI tools like ChatGPT can experience a decline in critical-thinking skills such as analysis, evaluation, and original writing. You can check out the whole paper here.
The study found that as AI use increases, some knowledge workers begin to defer to the technology without verifying its output, often due to time pressure. This suggests a future where humans act more as AI supervisors than creators. It may seem counterintuitive for a company investing (probably far more than) $80 billion in AI to share this, but it reflects a broader concern about AI’s cognitive trade-off: useful in the moment while (unintentionally) lobotomizing the workforce. It’s an admission that more work is needed to ensure AI enhances rather than diminishes decision-making.
One of my favorite writers, neuroscientist Erik Hoel, shares similar thoughts:
On to my worry. Which is obvious, at this point. It could be phrased as: given that we suddenly have very convenient time-saving tools for producing with ease content like writing, or art, or even ideas themselves, I do not expect this to actually improve in quality the output of artists, writers, thinkers, etc. In fact, I expect it to paradoxically decrease output in quality. For if the world is a museum of passion projects, everything that sets us at a distance from such projects, that places us less in contact with the demiurgic vital swells of human creativity, is a thing with hidden artistic costs, no matter how temptingly easy it then makes the process as a whole. Generative AI seems a perfect expression of this trap. Just blab out a first draft. Finish it in post. Or come up with an initial brainstorm. Let the AI flesh it out. And then we all become some version of CGI-obsessed late-career Peter Jacksons or George Lucases sitting there with our blinking chat windows, unable to produce anything of value, surrounded by only a green screen for human thought itself.
All of this is to say I think there’s ample evidence today to vindicate my Eloi/Morlock thesis. I don’t even need 100-page research studies; I see it playing out just by browsing my social feeds, watching the AI video/music/image renderings people make, reading sales pitches obviously generated by AI, and in countless other places.
You will be caught on the wrong side of this divide if you aren’t careful. This might be a situation where people who already developed craft in a world before AI have a creative and intellectual advantage, just like those raised in a world before smartphones have an attention advantage, as social psychologist Jon Haidt goes through in many talks such as this one.
But anyone who puts in some work can use all these things well (social media, mobile, AI). Just understand deeply what you are using, how it’s affecting your thinking and creative output, and whether its output is even accurate at all. This requires you to be honest with yourself, which is difficult for many in modernity. Just as roughly 42% of American adults are physically obese, I fully expect an intellectual obesity epidemic as well. Then again, we had this before AI (just look around); we are simply accelerating it further.