A few weeks ago, I was flown to NYC for an on-site interview for a Content Marketing Manager role at a high-growth startup that builds AI coaching and intelligence tools for field sales.

Part of the process involved presenting a content piece I would create for their audience, covering the goal, production timeline, and use of AI. I proposed an original research report combining the company's proprietary data from over 2.5 million recorded sales conversations with customer and SME interviews to reinforce the findings. It was the kind of piece that could have cemented the company's category authority.

But none of that mattered.

When I got to the production timeline (I proposed 12 weeks, which is reasonable for a project of that scope), the Head of Customer Success asked, "Is there anything we can cut here? I was able to do something similar in 8 minutes with Claude Cowork."

So I asked, "Why are you hiring a Content Marketing Manager then?"

"Oh, for project management and editorial oversight."

What followed was a speed-versus-quality conversation that left me fielding questions like:

  • “Would you publish something that was 70% there?”

  • “Could you produce 80+ pages of content in a month?” 

  • “How much would the content [of this report] actually matter?”

Needless to say, I didn't get the role. But the exchange told me everything I needed to know about how we view and use AI, and where it's heading.

Cognitive leverage vs. cognitive substitution

After this experience, I realized that we're not debating whether to use AI in our work. We're debating what work is for:

  • Work is an output problem. The goal is to produce as quickly as possible. In this worldview, friction is waste and struggle is inefficiency. AI confirms what this camp already believed: the thinking part was always the bottleneck, and now it doesn't have to be.

  • Work is a thinking problem. The output is an artifact of a real cognitive process. The struggle isn't the obstacle, but the work itself. AI can be a powerful accelerant in this model, but only when the human behind it is actually thinking. Otherwise, you haven't worked faster. You've just produced something faster. Those are not the same thing.

This fracture is showing up everywhere: in how companies hire, how teams operate, how brands sound, and how good the work actually is at six months, twelve months, and three years in.

What's dividing people isn't the tool. It's whether AI is being used to extend thinking or to replace it.

The first model is AI as cognitive leverage. These are people who already have a point of view. They use AI to pressure-test ideas, explore angles, accelerate drafting, and expand their range. The thinking exists before the tool, with the tool merely amplifying it. You can feel this in the output—there's specificity, there's tension, there's a perspective that could withstand scrutiny without the AI. In this model, speed is a byproduct of clarity.

The second model is AI as a cognitive substitute. Here, the tool isn't extending thought—it's generating it. The starting point isn't a perspective but a prompt. The output becomes the thinking. It looks efficient and reads as coherent, but it lacks weight because the person behind it hasn't done the work of deciding what they actually believe.

AI exposes the difference between people who have something to say and people who have something to generate.

Removing cognitive friction

For most knowledge work, quality was historically downstream of time. Not because spending more time automatically created better work, but because time forced a process: exposure, synthesis, judgment, articulation. You had to sit with inputs long enough to form a point of view and decide what mattered and why.

AI collapses that sequence. The question is no longer "can you produce something coherent?" It's "did you actually think about it?"

The blank page was never just empty space. It was where you worked through your thinking before articulating it, creating a feedback loop between thought and expression. When you eliminate that step, you don't necessarily move faster; you remove one of the key mechanisms through which ideas become yours.

Over time, this changes how people relate to their own thinking.

AI produces outputs that feel confident and complete, but you didn't solve the problem; you got a polished approximation of a solution. As reliance on AI grows, confidence in your own independent reasoning declines, which further increases reliance on the tool.

A system that rewards speed

The real shift isn't that companies are using AI. It's that some have built their entire operating culture around it, where the tool sets the standard and everything else, including human judgment, bends to fit. The result is an emerging class of organizations that aren't just AI-enabled but AI-first: outsourcing not just execution but editorial direction to the tool, leaving the humans inside functioning as shepherds of a system that doesn't need them to think.

This matters beyond the org chart because culture determines standards. And when the tool sets the standard, the standard becomes: fast, coherent, and trained on everything that already exists.

LLMs are built on aggregated data. Without strong editorial direction, outputs converge toward existing patterns. This is why so much AI-assisted content feels generic. Not because the tool is limited, but because the thinking guiding it is.

At scale, this creates an authority problem. When everyone has access to the same baseline intelligence, authority shifts from having information to interpreting it. The value is no longer in producing an answer, but in deciding which answers matter and why. That requires taste, context, and lived synthesis, none of which can be fully outsourced.

Which brings us to the paradox sitting at the center of the AI efficiency narrative. The tool that promises to scale your brand is the same one quietly making it indistinguishable. Your brand voice isn't a template. It's the accumulated output of your actual thinking: your specific perspective, your particular obsessions, your earned point of view. You cannot generate that. You can only express it. And when you hand the expression to a system trained on the median of everything that already exists, what comes back is indistinct.

What we’re willing to give up

We talk about efficiency as an unqualified good, celebrating output as evidence of value. Speed reads as progress, and friction reads as a slowdown. But we're beginning to see the downside of that logic.

More content doesn't mean more clarity. Faster production doesn't produce better ideas. And efficiency, applied without editorial judgment, flattens the very things that create distinction.

The deeper question underneath all of this isn't how to use AI; it's which parts of thinking we're willing to give up. Once a system reliably performs a function, it becomes easier to justify offloading it. The risk isn't offloading a single function, but offloading enough of them that human input is no longer needed.

At that point, you're not using AI to extend your thinking. You're using it to avoid it.

The struggle of hard, creative, and strategic work is how people develop a sense of what they're capable of, what they actually believe, and what kind of thinker they are. It's how they build expertise, voice, and perspective. 

When you remove that struggle, you remove the opportunity for that skill to develop. People might be excellent at operating the tools, but become increasingly uncertain about what they think. Organizations full of those people cannot do the things that matter most: navigate genuine ambiguity, make contrarian bets, build something original, or hold a point of view under pressure.

The companies getting this right aren't using AI less than anyone else. The difference is that they are ruthlessly clear about what they're using it for: to compress the distance between a formed idea and a finished output, not to replace the process of forming the idea.

The question isn't whether you should use AI. The question is whether humans are still doing the cognitive work that makes the AI output worth anything or whether the work has quietly become prompt, accept, ship, repeat.