Stop Reintroducing Yourself to AI

My speakers bureau recently suggested I make an FAQ.

Not dramatically suggested. Just reasonably suggested.

Something like, “Jeremy, you should probably have an FAQ on your site. Folks ask a lot of the same questions before events. It would be useful for potential clients.”

Which was true.

And I wanted to do it about as much as I wanted a hole in my head.

I know, I know. An FAQ is not exactly trench warfare. It’s not even hard, really. That was the problem. It was easy enough that I felt like I should do it, and annoying enough that I knew I never actually would do it.

Right after I retire, maybe.

Then I remembered something.

I have transcripts from all these pre-event calls. Every keynote client asks some version of the same questions. And I answer them, out loud, in my own words, over and over again.

Not only did a comprehensive log of questions exist; the answers already existed. My answers.

They were just trapped in the transcripts.

So I asked my AI chief of staff — basically the AI system I use to help with prep, follow-up, and all the work I should do but definitely won’t — a simple question:

Would you read the transcripts from my pre-event calls, identify the ten most frequently asked questions, find my answers in the conversations, and draft an FAQ in my voice?

It read 50 transcripts, found the recurring questions, pulled my actual answers, and wrote the FAQ.

In about three minutes.

It came back with answers I recognized. Like this one, when someone asked what “tool versus teammate” actually means:

“Do you ‘use’ your CMO? Your legal counsel? No — you work with them. The language we use reveals our mindset.”

My AI chief of staff didn’t invent that.

I’d said some version of it on call after call after call. The AI just found the through-line.

That was one of the first moments where I didn’t just think, “AI is useful.”

I thought, “Oh… This is what it feels like to have leverage.”

Not because the AI wrote an FAQ. That’s not the interesting part.

The interesting part is that it knew what kind of answer I would give. It knew the difference between my actual language and generic speaker-bio mush. It knew the job well enough to do the boring part without turning me into a brochure.

I did not need it to be creative. I needed it to be appropriately boring.

(Which, now that I say it, might be an underrated category of AI use.)

That did not happen because I had a magic prompt.

It happened because the AI had been onboarded.

The embarrassing part

There’s a small embarrassment hiding inside all of this.

Real talk? The reason that FAQ sat on my list was not that it was hard.

It was that it was the kind of task I should have already done.

You know the feeling. There’s a version of it on your list too. (I’m sorry. But there is.)

And I think this is what AI is actually surfacing for a lot of us.

The bottleneck is not AI literacy.

It’s self-literacy.

We are underusing AI because we have not done the work of making our own context legible.

The context I’m talking about isn’t really a set of AI files. It’s a set of answers to questions most of us have quietly avoided:

Who am I?

How do I sound?

What do I refuse?

What do I keep explaining over and over again?

What do I already know that I’ve never bothered to write down?

You can’t onboard a teammate until you’ve answered those.

Most of us never have.

AI is just the first colleague that actually demands it.

(Annoying? Yes. Useful? Also yes.)

The problem with better prompts

A lot of people are still trying to get better at prompting.

That makes sense. Prompting is visible. Prompting feels controllable. Prompting gives us something to tweak.

But prompting from scratch has a ceiling.

Every new chat starts with the same little ritual:

“Here’s who I am…”

“Here’s what I’m working on…”

“Here’s how I want this to sound…”

“Please don’t make it too corporate…”

“Actually, no, not like that…”

You know what’s funny? We would never treat a human teammate this way.

Imagine hiring a chief of staff and refusing to onboard them.

No priorities. No current projects. No examples. No decision rules. No sense of what matters. No warning that you hate phrases like “unlock your potential.”

Then every morning you walk in and say, “Can you help me prepare for this meeting?”

Of course the work is generic.

The person is not bad.

The person is uninformed.

That’s how most people use AI.

They want a chief of staff, but they keep opening a blank chat window and asking a stranger to improvise.

(And then, somehow, the stranger gets blamed for sounding like a stranger.)

Don’t prompt it. Onboard it.

A tool needs instructions.

A teammate needs context.

That’s the shift.

The best AI users I know are not constantly writing more elaborate prompts. They are building reusable context around the work.

Some of that context is obvious: who you are, what you’re working on, who you serve.

Some of it is more personal: how you sound, what you hate, where your standards are unusually high, what kinds of advice make you suspicious.

Some of it is operational: where the real work lives, what the AI is allowed to read, when it should ask questions, when it should just make the call.

I keep this in a simple set of files I call my “Teammate Stack.”
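If you want the shape of the idea in code, it’s roughly this: a folder of plain-text notes plus a tiny script that stitches them into one brief you can paste at the top of any new session. (The file names and layout below are illustrative guesses, not my actual stack.)

```python
# A minimal sketch of a "Teammate Stack": reusable context files
# assembled into one onboarding brief. File names are illustrative.
from pathlib import Path

STACK_FILES = [
    "identity.md",  # who I am, what I'm working on, who I serve
    "voice.md",     # how I sound; phrases I hate
    "rules.md",     # what the AI should never do; when to ask vs. decide
    "projects.md",  # current context: active work and priorities
]

def build_brief(folder: str) -> str:
    """Concatenate whichever stack files exist into a single
    context block to paste at the start of a new AI session."""
    sections = []
    for name in STACK_FILES:
        path = Path(folder) / name
        if path.exists():
            sections.append(f"## {name}\n\n{path.read_text().strip()}")
    return "\n\n".join(sections)
```

The point of the script is not the script. It’s that the context lives in files you maintain once, instead of in your fingers every morning.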

But the files are not the magic.

The magic is the onboarding.

The files only work because they forced me to answer a much better question than “What prompt should I use?”

They forced me to ask myself:

What would this AI need to know if it were joining my team?

That question changes the whole relationship.

It turns prompting from an act of performance into an act of preparation.

It turns the chat window from a vending machine into a working relationship.

(And yes, I know “vending machine” is not the most flattering metaphor. That’s kind of the point.)

This is also why I keep saying AI is a skill, not a pill. The skill is not memorizing clever wording. The skill is learning how to build the conditions for better collaboration.

Which, unfortunately, means noticing the conditions you’ve never bothered to build.

(See above: embarrassing.)

The first teammate is the hard one

Here’s what I’ve noticed.

The first AI teammate is the hard one.

Not because the technology is impossible.

Because the first one forces you to notice yourself.

You have to name what matters. You have to write down your rules. You have to listen for your own voice. You have to admit what you keep explaining. You have to decide what good work looks like before you ask AI to produce it.

That takes effort.

But the second teammate is easier.

A research teammate can inherit your identity, voice, rules, and current context. You only add the research-specific job.

A meeting-prep teammate can inherit the same foundation. You only add the meeting workflow.

A writing teammate can inherit the same foundation. You only add the editorial process.

By the fifth one, you are not starting over anymore.

That is the deeper leverage.

The point is not one better answer.

The point is making every future collaboration cheaper to start and easier to improve.

This is what I mean by “Don’t use AI. Work with it.”

Using AI is opening a blank chat and trying to get a decent answer.

Working with AI is giving it enough context to become a legitimate collaborator.

Try this before your next serious prompt

Before your next important AI session, don’t start by writing a better prompt.

Open a fresh chat. Don’t ask it to produce any output yet. Have it interview you first.

Try this:

I want to onboard you as a teammate before I ask you to help me. Please ask me one question at a time, waiting for my answers in between. Help me create a short working brief that captures who I am, what I’m working on, how I sound, what I hate, and what you should never do. Don’t invent anything. Interview me until you have enough context to draft the brief.

Then answer the questions out loud, using voice input if you can. (It’s faster, and you’ll say things you would never type.)

At minimum, make sure the interview answers three questions:

1. What would a smart human need to know on day one?

Not your full autobiography. Just the working brief.

Who are you? What are you trying to accomplish? What are the stakes? What does “good” look like?

2. What source material already exists?

Transcripts. Emails. Notes. Memos. Past drafts. Customer questions. Meeting recordings.

Before asking AI to invent, ask it to read.

That was the FAQ move. The answers already existed. I just hadn’t assembled them.

3. What should the AI never do?

This is the part people skip.

Tell it what you hate.

Tell it what sounds fake.

Tell it what risks matter.

Tell it when to challenge you, when to ask, and when to stop polishing and just make the thing useful.

Write those answers down somewhere you can reuse them.

That’s the start.

Not a perfect system. Not a giant operating manual. Just enough context to stop introducing yourself to the same teammate every morning.

The better question

Most people ask, “What prompt should I use?”

I think the better question is:

What would this AI need to know if it were joining my team?

That question changes everything.

It turns prompting into onboarding.

It turns one-off answers into reusable leverage.

And sometimes, if you’re lucky, it turns the FAQ you were absolutely never going to write into a finished draft in three minutes.

I'm putting together a more tactical walkthrough of the exact setup I use, including the Teammate Onboarding Interview: a prompt that builds your own “Teammate Stack” for you. But the principle matters more than the folder.

The goal is not to become a better prompter.

The goal is to stop reintroducing yourself to AI.

Related: It's a Skill, Not a Pill
Related: From Authorship to Stewardship
Related: You're Not Tired of AI. You're Scared of It.

