Real Concerns, Wrong Conclusion
I keep getting the same three questions in post-keynote Q&A sessions:
What about cognitive offloading?
What about homogenization?
What about sycophancy?
Great questions. Important questions. Questions any AI training program worth its salt should be able to answer.
But here's what I've started noticing (and what's actually prompting this post): the people raising them aren't asking. They're concluding. The questions arrive packaged as reasons to be skeptical of AI — when really, they're reasons to be skeptical of uneducated AI usage. Different problem. Different solution.
I'm calling this a smoke screen — real concerns, deployed as a reason to stay still.
This Pattern Has a History
Two years ago, I gave one of the most intimidating keynotes of my career: the Stanford Faculty Club. Every person in the room was a world-renowned expert in something, which made the post-keynote Q&A something of a high-wire act. (It was probably the highest concentration of "I don't know"s I've ever had the pleasure of admitting.)
What hit me afterward wasn't any single question — most were genuinely thoughtful. It was the pattern behind them. Many faculty members were treating their legitimate concerns as a reason to wait. I wrote about it at the time and called the dynamic what it was: a scapegoat. A convenient way to justify inaction without admitting the underlying discomfort.
Two years later, the pattern hasn't gone away. It's gotten more articulate. The concerns now have names from the academic literature — cognitive offloading (offloading mental work to a tool until you can't do it yourself), homogenization (everyone's outputs starting to sound the same), sycophancy (the model tells you what you want to hear) — and they sound less like fear and more like rigor. (Better vocabulary. Same posture.)
But the structure is identical. And it's worth saying directly: a real concern is not a valid conclusion. Just because something is a problem doesn't mean opting out is the answer.
"Is This Still a Concern?"
Last week I got an email from a guy I'll call Alex — an EVP and CFO in the finance industry. He'd been at a conference where I was a keynote speaker, and another speaker said something that concerned him: AI will tell you what it thinks you want to hear. Even if you ask it for the truth, it'll find a way to say the same thing in a different way.
Alex emailed me to ask:
Is this still a concern? If so, is there a way to distinguish between the facts and what we want to hear?
That's a beautiful question. Not because the answer is novel — sycophancy is a well-documented phenomenon and there are great research papers on it — but because of the posture the question reveals. Alex isn't using sycophancy as evidence to abstain. He's using it as a problem to solve. How do I get good at this thing, given that this is a known issue?
That's what an educated relationship with AI looks like.
Here's the response I sent him (lightly edited):
Alex, thanks for the insightful question. The short answer is yes — by default, AI wants to be helpful, which it generally defines as being nice. It's incumbent on you and me, as the human operators, to redefine "helpful" to mean willing to poke holes in arguments and point out blind spots.
The sad truth is most humans don't actually want to be told what's wrong with their thinking, so AI sycophancy is just delivering on the widely accepted definition of helpfulness. If you, like me, are in the minority who prefers a direct assistant — one that pulls no punches — then yes, you have to teach it. You have to tell it that's how you define helpful.
We'll get into more issues like this over the next three weeks. (Alex is taking my three-week AI Bootcamp.)
Keep the questions coming, and I look forward to continuing to learn together.
Notice what isn't in that response: a defense of AI. An apology for sycophancy. A reassurance that everything's fine. The concern is real, and the answer is real. The answer is that you have to do the work to redefine what "helpful" means. That's the curriculum. The question is the syllabus.
Cars (and Why Nobody Goes Back to Bikes)
Here's how I've started thinking about this whole category of question. The more powerful the tool, the more seriously we take the training. (I know, I know — to a teacher, everything is a curriculum problem. Guilty.)
Cars are dangerous. Roughly forty thousand people die in car accidents in the US every year. Driving while distracted, while drunk, while tired — careless driving genuinely kills people. It is, by any reasonable measure, an enormous responsibility.
And yet — quick gut check — would you give yours up? Walk to your next meeting? Bike to the airport? (For any kind of distance, basically nobody opts for the bike. Even people who love biking don't pretend it's a substitute.) The power isn't a bug. It's the whole point. Nobody wants to go backwards.
So we don't ban cars. We do drivers ed.
I want to be very specific about what drivers ed actually is, because this is where the analogy starts to do real work for the AI argument: drivers ed isn't waiting until you've passed a test before you drive. Drivers ed is driving — with a learner's permit, with feedback, in increasingly demanding conditions, until you can do it unsupervised. The training and the driving are the same activity.
Same with AI. Experimentation isn't the opposite of training; experimentation, with feedback, is training. The mistake isn't "they're trying stuff before they're ready." The mistake is using legitimate concerns as a reason not to engage at all — when those very concerns are exactly what good training addresses.
If your AI rollout consists of buying licenses and saying "go forth and prompt," you haven't done a rollout. You've done a procurement. You've handed out keys without ever offering a learner's permit. (It's a skill, not a pill — I've made this argument before, and I'll keep making it.)
What To Do With This
A few action items, depending on which side of this you're on.
1. If you're a leader who has bought licenses for your team — your job hasn't ended; it's begun.
The license was the easy part. The hard part is what comes next: how do people get the equivalent of drivers ed for AI? That means designated time, structured exposure to the actual issues (sycophancy, offloading, homogenization — yes, all of them), and ongoing feedback loops. Not a one-hour webinar. Not a Slack channel. Recurring, structured practice with someone who can tell you when you're drifting. If your IT team had handed out laptops in 2002 and called that a digital strategy, you'd have laughed. Same logic applies here.
2. If you're someone who hears yourself raising one of these concerns — listen to which question you're actually asking.
"What about sycophancy?" can mean two different things. It can mean "This is a real issue, and I want to learn how to manage it" — that's Alex's question. Or it can mean "This is a real issue, therefore I'm justified in not engaging" — that's the smoke screen. They sound identical from the outside. From the inside, you know which one you're asking. The honest version is the one that points toward the syllabus, not the exit.
3. Find an Alex-shaped feedback loop.
The reason Alex's question landed so well is that he's already in a structured learning relationship — specifically, my three-week AI Bootcamp. (That's the "next three weeks" my response refers to.) He has recurring exposure to exactly these kinds of issues. He's doing the reps. If you don't have that, build it. A weekly call. A community of practice. A daily drill. (I run a few of these — but more importantly, find the one that fits your context. The form matters less than the recurrence.)
Keep the Questions Coming
I closed my response to Alex with a line I meant sincerely: Keep the questions coming.
Because the questions aren't the problem. The questions are great. They're the only honest way through this.
The problem is what some people are doing with the questions — using them as ground cover to stay on the sidelines. The concerns are real. The danger isn't where they're pointing. (And in case I haven't been blunt enough: the actual danger is the cost of staying on the sidelines while the people willing to do the reps quietly pull away — and the gap compounds.)
If you're asking the questions to learn, you're already through the smoke. Keep going.
If you're asking them to stay still — that's a real concern. Just a different one than the one you've been asking about.
Related: Embrace AI Despite Uncertainty
Related: It's a Skill, Not a Pill