An AI Trap: Between Optimism and Artificial Competence

Many of my first jobs are no longer jobs. Not for humans, anyway.

The summer I turned 17, I was a deeply tanned meat abacus for a seed genetics company. I walked through fields and counted cornstalks. That was the whole job. During those months, I counted in my sleep, woke with the phantom feeling of a dowel clicking in my hand as I tapped each stalk. Shiver.

In college, I moonlighted at a mail-order call center. That job ate souls. Timed bathroom breaks. Toxic callers eager to berate someone, anyone, for late orders. I followed a decision tree, recited scripts, and before ending calls, pitched a credit card with an unmissable limited-time offer you just had to hear to believe. I was exceptionally lousy at this last bit and never sold a single card.

Those jobs, along with several others from early in my working life, no longer require warm bodies. Drones, IVR systems, and self-checkouts have swallowed them whole. Humanoid robots are poised to handle more of the physical work, while AI agents (or whatever we end up calling them) will handle the less visible digital busywork.

Most people grasp this shift. They have similar stories. There has long been widespread concern that technology, and now AI, will eat human jobs, particularly in manufacturing and transportation. It already has. It will continue to, and at a faster pace.

Optimism About the End of Some Work

Job automation through AI is inevitable, but not inherently negative. Just as engines and tractors enabled my ancestors to work fields with vastly greater efficiency, someday our children will look back and flinch at the tedium of the work we do now. It’s not us versus AI; it’s us plus AI.

Relative to our work lives, optimism is the only rational perspective. AI will enable more people to do less rote, menial, and mundane work, both mentally and physically.

Blue- and white-collar work will still exist. If anything, we may see equal or greater pressure on white-collar jobs. Many of these positions revolve around middle-managing, intermediating communications, generating impotent forecasts, analyzing spurious data, and facilitating pageantry. People are already using LLMs to great effect to “complete” such empty work.

I’m hopeful AI can replace work that wastes human life (take many aspects of my first jobs, for example). But there are ways in which AI is creating its own form of waste.

Insincerity and Artificial Competence

You know the feeling: You’re talking to someone face-to-face, and you sense them faking something or trying to deceive you. You feel it in your gut. You are good at detecting insincerity. So is everyone else.

The same goes for writing. This is worth repeating: Other people are just as good as you at sensing insincerity. Once you lose credibility, you lose attention, and you lose the potential to be understood.

It’s glaringly obvious when something is written by an LLM with little original source material. The social slop, the LinkedIn broetry. The plastic and benign companywide email. That project brief with exquisitely structured headlines and bullet points, but zero substance. Though I’m often too chickenshit to call it out when I see it (especially at work), I’m noticing more and more AI slop with proportionately less meaningful communication. Less substance per word.

I write, so I’m hypersensitive to AI slop. And I probably feel threatened by it, because sometimes it can do a passable job with slim guidance. Still, when I see someone attempt to broadcast slop as something novel, I think, to paraphrase Logan Roy, “This is not a serious person.”

Some observations about AI’s impact on human capability:

  • AI can lead us to overestimate our competence, especially in domains we don’t understand, because LLM-generated content appears polished and compelling. Paradoxically, some people are using LLMs to rapidly scale up their output while their core capabilities atrophy.
  • It’s always been easier to broadcast a thought than to guarantee it would be understood. Now, with GenAI suggesting what we should say, there’s even less of an expectation (and perhaps incentive) for us to understand what we’re thinking.
  • When it’s cost-effective, we use technology for the stuff we suck at, don’t like, or don’t understand. This is great! It’s why LLMs can already be excellent tutors, editors, researchers, coders, brainstormers, etc. But there’s a dependency trap. Those who have strong fundamental skills and use AI to amplify them will leapfrog peers who rely on AI to mask missing core competencies.
  • Skill/output inequality will accelerate. With AI, the rich will get richer. But the merely productive will have an opportunity to become prolific.