From Authorship to Stewardship

I've asked Brice Challamel to write this week's guest post, and honestly, I've been looking forward to this one for a while.

Brice is one of my favorite guests on Beyond the Prompt. When he came on, he was head of AI products at Moderna, where he led one of the most ambitious AI transformation programs in the enterprise world. Since then, he's joined the C-suite at OpenAI as head of AI strategy and adoption — a remarkable promotion, and a validation of his standing as one of the most credible voices in AI-driven transformation today.

There's a line from our first podcast conversation (spoiler: there’s another interview with Brice coming soon…) that genuinely changed how I think. Brice said: "There are no more individual contributors at Moderna; now everyone is a team of five themselves — their AI assistant, their AI coach, their AI expert, and their AI creative partner." That paradigm shift — from solo contributor to leader of a personal AI team — has been so influential that Brice ended up creating a LinkedIn Learning course inspired by it: From Tool to Teammates: Working Smarter with AI. It might even feature a cameo appearance by yours truly…

Brice's thesis this week is one of those ideas that sounds obvious once you hear it, but reframes everything: the shift from authorship to stewardship. In an AI-augmented world, the professional's job is no longer to produce the artifact. It's to steward the outcome — to set direction, exercise judgment, ensure quality, and take responsibility for what gets shipped.

I've been chewing on that word — stewardship — all week. And here's why: it actually came up in our podcast conversation with Greg Shove this week, which we're releasing alongside this piece. Henrik and I were debating with Greg about what's really required of professionals in this moment, and the word stewardship surfaced. Greg pushed back on it a bit. Henrik and I pushed back on the pushback. It's a word that keeps demanding my attention.

And I think Brice's piece clarifies why. Stewardship is the answer to the question that's been running underneath so much of what we've explored in this newsletter: if AI can draft the memo, build the deck, generate the analysis, and prototype the product — what exactly is the professional's contribution? Brice's answer isn't "less." It's "different." You're no longer the author. You're the steward. And stewardship, it turns out, is harder. It requires more judgment, not less. More taste, not less. More ownership, not less.

If you've been following the Team of Five idea from our earlier conversations — this is the next chapter. The Team of Five tells you who's on your team. Stewardship tells you what your role is.

You can find more of Brice's thinking in his LinkedIn newsletter, Power of Why — well worth a follow.
— Jeremy

From Authorship to Stewardship

A few days ago, Jeremy invited me to contribute to Methods of the Masters.

I have been reading his newsletter for a long time, and admire its mix of humility, humor, self-awareness, and imagination. Jeremy has a unique way of making AI relatable, and it always starts with a personal story. So here’s mine, in the grand tradition!

I was recently reviewing the transcripts of a series of meetings we had with my team about the organization of Champion Networks in large organizations. And, as I went from summaries to insights and back to summaries, suddenly I wasn’t so sure anymore about what was said during the meetings… Even though I had attended each and every one of them! 

In that moment, I felt both the power and the challenge of operating at a different level of abstraction, even with simple material like meeting transcripts. AI models help us think, draft, synthesize, critique, research, and execute. They are becoming part of the working fabric for us and around us.

And as I reflect on it, the core transition I keep coming back to is the culture shift from authorship to stewardship.

That’s the change in knowledge work where value is increasingly tied to guiding an AI-assisted creation process, judging the result, and owning the outcome with clarity and confidence, rather than personally reading or writing every word. And as my experience tells me, it requires practice and dedication.

That idea kept growing as I reflected on Jeremy’s own work about the move from tools to teammates, and this article felt like a unique opportunity to explore it.

How can we name, understand and guide this massive culture change for knowledge workers, and for their organizations? 

The first signs are already here

To better understand it, let’s start with something very ordinary.

Think about your last week of work. We already skim less and delegate more. We ask AI to summarize long email threads, read dense reports, identify the relevant files in a folder or an internal site, compare documents, or brief us on a meeting based on past transcripts. In the same way, we no longer draft everything from scratch. We ask AI to help us structure a memo, sharpen an argument, stress-test a recommendation, improve a slide, rewrite an opening, or surface missing questions. And if you're like me, you ask AI to draft almost every email answer, either because it's so simple that you don't care to spend ten minutes on it, or because it's so important that you want to make sure you've used the appropriate language and structure!

That doesn’t remove us from the reality of work. But it does significantly change our role in the work and how we think of it.

The traditional picture of knowledge work was simple. We read the material. We wrote the document. We built the asset. Our ownership came in part from direct authorship. We had an intimate knowledge of the content because we produced each tiny fraction of it ourselves, sometimes with painstaking effort and craft.

I’ll readily admit that before AI, I could sometimes spend an hour writing an important email, and even more time designing and organizing a slide to express and document an important concept.

That picture is starting to change. Even writing this, I feel it belongs not only to a long time ago, but to a different time altogether.

As AI becomes more present in the reading, synthesis, drafting, and review of work, our productivity increases, but value is less tied to touching every word ourselves. It is more tied to guiding the process well, judging the result well, and standing behind the outcome with clarity and integrity as we embed these systems in our individual and collective processes.

This is what I mean by stewardship.

What stewardship actually means

Stewardship is not a softer form of responsibility. In many ways, it has already proven to be a harder one: it challenges work conventions, and the delegation it relies on can inhibit critical thinking precisely when it is most needed.

If I sign my name to something because I wrote every line myself, the source of ownership is easy to explain. If someone questions a paragraph or a sentence, I know where it came from and why I wrote it that way.

If I sign my name to something that was shaped through a mix of human judgment and AI contribution, ownership has to come from somewhere else. It comes from how thoughtful I was before I even started the production itself: how I framed the topic, searched for supporting evidence, and crafted the initial prompts, all of which I probably did with AI to begin with. And then from the way I iterated, expanded, nudged the work towards completion, and reviewed it with discernment, standards, and accountability.

A good editor doesn’t need to have written a manuscript to be accountable for what gets published. A construction manager doesn’t need to have laid every brick of every wall to be responsible for the safety of the building.

The same logic is now making its way to knowledge work.

We are becoming custodians of outputs that are researched, drafted, expanded, checked, and refined with help from systems we supervise rather than manually produce. The responsibility remains ours. The mechanism of that responsibility is changing, and so is how it is perceived.

This is happening at the individual level, but also at the level of teams and organizations. Which means we need to redefine standards of accountability at each of these three levels of work.

And once work is produced through a combination of people and models, a new set of questions appears. How do we recognize contribution? How do we review quality? How do we decide what needs human oversight and what can be trusted at scale? How do we preserve accountability when the production chain is more distributed than before?

These are stewardship questions, and they have accompanied decades of human progress in automation. I don’t have perfect answers for all of them, and sometimes I’m not even sure I have good ones, but I know for certain that they will need answers in the coming months and years. And not only the type of answers that Jeremy and I can come up with over a cup of coffee, but the type that has been debated, deconstructed, and pressure-tested by large organizations, reviewed by top consulting firms, affirmed by academia, and accepted by society.

In the meantime, let’s face it: we’re stuck in the middle of change.

Navigating the middle of the river

Most organizations are now past the first phase of excitement. We understand that AI can generate text, images, code, summaries, analyses, and recommendations. We know it can save time and change the very nature of work. We marveled at it for a while, and now we’re getting used to it. The novelty is no longer the main story.

But we are not yet in a mature state either. Very few organizations have fully integrated AI into their workflows with the right combination of trust, security, governance, review practices, role clarity, and value measurement. 

Fewer still have mapped use cases, function by function and line of business by line of business, to the capabilities of AI. We are only starting to build the basic conditions for AI to be an accepted, standardized, and trustworthy component of work.

So we are in between. And as we cross this river of change, we feel the brunt of the effort the most, because between the two shores is where the current is strongest, the water is deepest, and our sense of where we are is most challenged.

That middle is uncomfortable because the old norms are weakening faster than the new ones are stabilizing. Many people feel this in a very practical way. They know the work has changed. They know they are expected to adapt. But they are not always sure what good looks like yet.

That uncertainty has a human cost, and it requires leaders to shift gears and take this change very seriously.

Some people feel left behind. Some feel exposed. Some feel that the quality of their work is improving while their confidence is eroding. Recent research indicates that while AI significantly boosts objective performance for those who already embrace it, it simultaneously creates a "confidence gap" where users feel less certain about their work.

That is part of why this transition deserves more empathy than it usually gets. We are not just adopting a tool. We are changing our relationship to reading, writing, contribution, and ownership. And this means we are evolving in our relationship with ourselves as well.

A practical way to lead this transition

I think there are three parts to leading this transition well. They are not the entire story, but without them there is simply no story that’s going to work for people and organizations.

1. Name the pattern

The first step is to say clearly what is happening, and to find new, appropriate words to explain and understand it. That is, in part, the purpose of this article.

If we are moving from authorship to stewardship, we need language for it. Without language, the change stays disjointed and confusing. With language, it becomes discussable.

Naming the shift helps in two ways.

First, it opens the door to a real conversation about what is changing in the work itself. Second, it lowers the stigma. People stop feeling that they are individually failing to keep up with a hidden standard. They can see that the standard itself is evolving and that this change is upon everyone, not just them.

For example, the weekly team meeting could now start with, "What have you supervised with AI this week?" instead of, "What have you done this week?"

This matters more than it sounds. When work improves but the sense of ownership becomes fuzzier, people need a way to make sense of that gap. Naming the change helps.

2. Build the review layers

The second step is to adapt the ecosystem around the work.

Take code as a simple example. Once code can be written by humans, by models, or by both, authorship stops being the main operational question. The main question becomes whether the code is reliable, secure, maintainable, and fit for purpose.

The same thing is starting to happen in many other workflows. Reports, memos, proposals, research summaries, internal knowledge bases, customer support drafts, policy documents, and analysis decks all need review systems that focus on outcome quality rather than authorship mythology.

That means we will need layered reviews. At first, those layers will combine human review and agentic review. Dedicated systems can check for policy issues, factual inconsistency, formatting, missing elements, code security, tone problems, or confidence signals. Humans can then focus more on exceptions, ambiguous cases, and high-stakes judgment.

Over time, this becomes a new operating habit. For example, every memo could now be tagged with a "Human Oversight" level: L1 (trusted AI output), L2 (fact-checked by a human), L3 (high-stakes judgment required).
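To make that concrete, here is a minimal sketch of what such a layered review could look like in code. Everything in it is hypothetical, from the check functions to the escalation rules and the L1/L2/L3 thresholds; it illustrates the pattern rather than prescribing an implementation.

```python
from dataclasses import dataclass, field

# Hypothetical agentic checks. In a real pipeline these would call
# dedicated review systems (policy, fact-checking, tone, security).
def policy_check(text: str) -> list[str]:
    terms = [t for t in ("confidential", "internal only") if t in text.lower()]
    return [f"policy: contains '{t}'" for t in terms]

def formatting_check(text: str) -> list[str]:
    return [] if text.strip() else ["formatting: document is empty"]

AGENTIC_CHECKS = [policy_check, formatting_check]

@dataclass
class ReviewResult:
    oversight_level: str                      # "L1", "L2", or "L3"
    findings: list[str] = field(default_factory=list)

def review(text: str, high_stakes: bool = False) -> ReviewResult:
    """Layered review: run agentic checks first, then route to humans."""
    findings = [f for check in AGENTIC_CHECKS for f in check(text)]
    if high_stakes:
        return ReviewResult("L3", findings)   # high-stakes judgment required
    if findings:
        return ReviewResult("L2", findings)   # a human should fact-check
    return ReviewResult("L1", findings)       # trusted AI output

print(review("Quarterly summary, internal only draft.").oversight_level)  # L2
```

The design point is simple: the automated layers narrow the set of artifacts a human has to touch, so human attention concentrates where judgment matters most.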

We stop asking only, “Who wrote this?” and start asking, “How was it reviewed, what standards were applied, where is confidence high, and where does human judgment still need to step in?”

That is stewardship in practice.

3. Bring everyone into the transition

The third and last step is to make sure this does not become a fractured cultural experience.

In many organizations today, AI proficiency is very uneven. That’s normal. Some people experiment every day and have constant “wow” moments that they can’t wait to share with their peers and friends. Others are curious but hesitant. Others lack access, training, or confidence and are mostly still in the starting blocks.

The problem is not that people move at different speeds; that is always the case with change. The problem is when that difference becomes a social divide.

If a growing part of work depends on knowing how to brief an AI system, review its output, iterate with it, and judge its reliability, then people without that fluency are not simply slower. They risk being pushed to the margins of collaboration.

That is bad for performance. It is also bad for belonging.

So the goal is not only to provide access. It is to build shared literacy, for example by rolling out AI stewardship guides for everyone, instead of just prompting best practices and generic "AI training."

Teams need common language, common examples, shared practice, and time to compare notes. Managers need to model how they themselves use these systems, where they trust them, where they do not, and how they review the results.

That is how a work culture crosses the river together.

The other shore

These three practical steps are how we build the bridge to the next era, an era defined by a new professional virtue: stewardship.

And there is a deeper reason I keep thinking about this shift.

Authorship has always carried dignity. There is something deeply human in making things with our own minds and hands. That doesn’t and shouldn’t disappear. But stewardship may become one of the defining civic and professional virtues of the AI era.

Because if AI gives us more drafts, more inputs, more synthesis, more options, and more speed, then our role becomes more centered on curation, judgment, care, and responsibility. We still create, and we also supervise creation at a scale and pace that used to be impossible.

That asks more of us, not less. It asks for intention, taste, discipline, and new standards of honesty. The ability to review well. The ability to say yes with confidence and no with reasons. The ability to hold a line for quality even when production becomes easier.

That is a serious job and a meaningful one. It’s also a promising job, and I see a bright horizon on the other side of this transition.

Related: It’s A Skill, Not A Pill
Related: Beyond the Prompt: Brice Challamel
