Your AI Dashboard Is Lying to You (And What to Measure Instead)

I've asked Eric Porres to write this week's guest post, and I'm thrilled he said yes.

Eric is the Head of Global AI at Logitech — and a favorite guest from Beyond the Prompt. He's one of those rare leaders who isn't just talking about AI transformation; he's building it, measuring it, and questioning his own assumptions about it in real time. His weekly Substack, Beyond Reason, is one of my favorite regular reads. (If you're not subscribed, fix that.)

Eric's thesis this week is deceptively simple: the thing to measure isn't AI usage. It's deletion. What reports stopped getting written? What workflows collapsed from days to hours? What artifacts just... vanished — because an agent replaced them?

I read that and got chills. Because later this week, we're releasing a Beyond the Prompt conversation with Leidy Klotz, author of Subtract — a book about why humans are cognitively biased toward addition and almost never think to solve problems by removing things. Leidy makes the point that addition is visible. You can see what gets built, what gets shipped, what gets added to the pile. Subtraction is invisible. Nobody gives you credit for the ten things you said no to so you could write the book. You just see the book.

That's exactly Eric's problem, applied to the enterprise. The engineer who experiments with 40 prompts shows up on the dashboard. The finance manager who quietly deletes three recurring reports doesn't — even though she's the one who actually changed.

And if you've been following this newsletter the last few weeks, you'll recognize a third version of the same insight: at the personal level, AI makes it easy to add more work — but the real question is what are you willing to subtract so the things that actually matter get your time?

Three angles. Same blind spot. We're wired to add, and we're terrible at noticing when the right move is to delete.

Eric's piece is one of the most honest accounts I've read of what it actually looks like to build an AI analytics program inside a global company — and then realize the analytics themselves were asking the wrong question. I think you'll find it as clarifying as I did.

— Jeremy

Your AI Dashboard Is Lying to You
(And What to Measure Instead)

TL;DR: A year into building Logitech’s enterprise AI program, I’ve learned that the standard AI analytics — prompts, tokens, monthly active users — are almost completely uncorrelated with whether anything important has changed. The real signal isn’t usage. It’s deletion: which reports stopped being written, which workflows collapsed from days to hours, which artifacts simply vanished because an agent replaced them. This piece walks through the analytics journey we took at Logitech — from sidecar dashboards to behavioral segmentation to what we’re calling “AI Pulse,” a kind of organizational HRV for AI readiness. If you’re staring at a usage dashboard wondering why the numbers look great but the work hasn’t changed, this is for you.

If this sounds familiar: Last May, I wrote about the death of dashboards — the argument that LLM-native analytics would bury traditional BI by letting people ask questions in natural language instead of navigating pre-built views. That piece was about how we query data. This one is about what we should be measuring in the first place. Consider it the sequel: the dashboard isn’t just dying as an interface. It’s dying as a mental model for understanding AI adoption.

The View from 30,000 Feet

The big-picture numbers on AI right now sound reassuring.

McKinsey’s State of AI 2025 report shows generative AI is no longer a side project. Eighty-eight percent of organizations report using AI in at least one business function, up from 78 percent a year earlier. Nearly a quarter say they’re scaling agentic AI somewhere in the enterprise. Wharton’s 2025 AI Adoption Report tells a similar story: 82 percent of enterprise leaders use GenAI at least weekly, nearly half daily, and three out of four say they’re seeing positive returns.

In other words: AI is no longer the experiment. It’s increasingly being treated like infrastructure.

At the same time, MIT and Oak Ridge National Lab released the Iceberg Index — a “digital twin” of the U.S. labor market suggesting current AI systems could theoretically perform tasks equivalent to 11.7 percent of the workforce, or roughly $1.2 trillion in wages.

So on one side: adoption and ROI are up. On the other: AI could already automate a non-trivial slice of the work we do.

And yet, inside most organizations, the lived experience feels much smaller. People are still manually building decks, filling out forms, typing the same updates into three different systems, and sitting in meetings that could have been an agent.

That disconnect has only gotten louder since I first drafted this article in late January. Matt Shumer’s viral essay “Something Big Is Happening” captures the capability side of the gap: AI that builds entire apps, tests its own work, and makes decisions that feel like judgment. Ethan Mollick’s guide to AI in the agentic era makes a point that should keep every analytics team up at night: the same model, in different harnesses, produces radically different outcomes — which means “AI usage” isn’t even a coherent thing to measure anymore.

Because a year into building Logitech’s enterprise AI program, with thousands of employees building internal assistants, using coding copilots, and running Gemini inside Workspace, here’s the conclusion I keep coming back to:

AI adoption is not a training problem. It’s a deletion problem.

The models are good enough. People are curious enough. The analytics are definitely fancy enough.

The real bottleneck is much more human and much more boring: who has permission to stop doing the old thing?

Until you can answer that, your AI dashboards are mostly expensive illusion.

What the Studies Say (and What They Don’t)

The McKinsey numbers are worth sitting with for a moment.

Eighty-eight percent adoption sounds like a finish line. It is not. McKinsey defines “AI high performers” as organizations where AI contributes more than 5 percent of EBIT. Only about 6 percent of companies qualify. The other 82 percent are using AI — and barely moving the needle on enterprise economics.

A Harvard Business Review analysis published in February 2026 helps explain why: employees experiment with new applications but don’t integrate them deeply into how work actually gets done. Leaders misread this as an execution gap when it’s really a measurement and behavioral one. People try the applications. They just don’t change the work.

Wharton’s research tracks a similar arc — from exploration to experimentation to what they call “accountable acceleration.” Weekly GenAI usage among leaders has climbed north of 80 percent. Almost half are daily users. Seventy-two percent of firms say they’re formally measuring ROI. And roughly three in four leaders see positive returns.

The report also makes two points that should be in blinking neon. First: people set the pace — usage is mainstream, but readiness still depends on the humans leading and learning inside the organization. Second: capability building lags ambition — the applications race ahead; culture, skills, and process design jog behind, trying not to get dropped.

A Recon Analytics survey of over 120,000 enterprise respondents puts an even finer point on it: only 8.6 percent of companies report having AI agents deployed in production. Fourteen percent are developing agents in pilot form. And 63.7 percent report no formalized AI initiative at all. The “88 percent adoption” story and the “8.6 percent in production” story are both true — and the distance between them is the whole game.

Now add the MIT Iceberg Index. If you treat the U.S. labor market as a simulation of 151 million workers, 32,000 skills, and 900-plus occupations, and ask “given what AI can already do today, how much of this is technically automatable?” — you get 11.7 percent.

Three truths, all at once: AI capabilities are real and accelerating faster than most people realize. Enterprise adoption is mainstream but shallow, with production deployment still rare. And most people’s day-to-day work has not changed nearly as much as those first two facts suggest.

The gap between those truths is where analytics should live. Instead, most AI analytics are busy telling a much less interesting story.

Why “Usage” Is Such a Seductive Lie

I’ve written before about the weirdness of DAU/MAU metrics — how “active” is the only word in that acronym that’s up for debate. If you open an app and immediately close it, congratulations: you are active.

The GenAI version of this is the usage dashboard: number of prompts, number of tokens, number of “monthly active users.” These charts go up and to the right. They look great on a slide. They are also almost completely uncorrelated with whether anything important has changed.

An engineer who experiments with 40 prompts in a day and then never comes back is not the same as a finance manager who quietly moves three recurring reports into an agent and never types them by hand again. Both show up as “usage.” Only one represents deletion.

The problem isn’t that usage metrics exist. The problem is that they’re treated as evidence of transformation when they’re really just evidence of curiosity.

Translation: your dashboard is measuring curiosity. Transformation shows up when artifacts stop existing.

At Logitech, we learned this the hard way.

Inside Logitech: Receipts from Year One

I’ve had “Head of Global AI” on my badge since January 2025. Before that, I was the guy running experiments on nights and weekends, trying to figure out how to make AI earn its keep.

A year in, we now have a full AI analytics warehouse covering our 6,000-plus workforce, built almost entirely by AI. Gemini and Claude wrote and rewrote most of the 3,500+ lines of Google Apps Script that power it, across 257 deployments and seven underlying sheets. My job was to ask better questions, wire the systems together, and keep deleting anything that wasn’t giving us real signal. AI wrote it; this human reviewed, tested, and owned the outcome.

What we ended up with is eight views on the same story:

  • A summary layer in plain language.

  • A leadership lens for any exec.

  • A geography lens by country.

  • Time-series views of MAU and token consumption.

  • A Champions-vs-everyone cut proving our champion network uses roughly twice the tokens and is nearly twice as likely to build assistants.

  • A training ROI view.

  • An assistant builders view surfacing the 1,100-plus people publishing their own AI teammates.

  • And underneath it all, an enriched user table — 6,000+ humans by 50-plus attributes — that lets us answer ad-hoc questions without rebuilding from scratch.
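To make that last layer concrete, here’s a minimal sketch of the “one enriched row per person” join. This is illustrative TypeScript rather than our actual Apps Script; the field names (email, tokens30d, orgUnit, and so on) are placeholders, and the real warehouse joins far more sources than two.

```typescript
// Minimal sketch of the "one enriched row per person" join.
// Field names and sources are placeholders, not our production schema.

interface UsageRecord {
  email: string;
  platform: string;   // e.g. an internal assistant, a copilot, Workspace AI
  tokens30d: number;
  sessions30d: number;
}

interface OrgRecord {
  email: string;
  country: string;
  orgUnit: string;
  managerEmail: string;
}

interface EnrichedRow {
  email: string;
  country?: string;
  orgUnit?: string;
  managerEmail?: string;
  platforms: string[];
  tokens30d: number;
  sessions30d: number;
}

function enrich(usage: UsageRecord[], org: OrgRecord[]): EnrichedRow[] {
  // Everything keys on email, so new signal sources fold in cheaply.
  const orgByEmail = new Map<string, OrgRecord>();
  for (const o of org) orgByEmail.set(o.email, o);

  const rows = new Map<string, EnrichedRow>();
  for (const u of usage) {
    const row: EnrichedRow = rows.get(u.email) ?? {
      email: u.email,
      country: orgByEmail.get(u.email)?.country,
      orgUnit: orgByEmail.get(u.email)?.orgUnit,
      managerEmail: orgByEmail.get(u.email)?.managerEmail,
      platforms: [],
      tokens30d: 0,
      sessions30d: 0,
    };
    if (!row.platforms.includes(u.platform)) row.platforms.push(u.platform);
    row.tokens30d += u.tokens30d;
    row.sessions30d += u.sessions30d;
    rows.set(u.email, row);
  }
  return Array.from(rows.values());
}
```

The design choice worth copying is the join key: once everything hangs off one identifier per human, ad-hoc questions become filters instead of rebuilds.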

This infrastructure turned “we think adoption is going well” into statements like: “Gemini is now used monthly by over 95 percent of core employees.” “Roughly three out of four people are active on LogiQ, and 70-plus percent are using both platforms.” “The top 20 percent of users generate more than 80 percent of the activity.” “Over 1,100 people have created custom assistants — about an 18 percent builder rate in a hardware-centric company.”

The warehouse didn’t magically solve adoption. What it did was give us a clear, shared mirror. From there, our analytics journey unfolded in three stages.

Stage 1: Sidecar analytics — curiosity in high definition

The first thing we did was simple: shine a light. We mapped LogiQ and Gemini usage across the org chart, countries, and leadership teams. We built sunbursts where each wedge was a team and each color represented an engagement tier. We plotted ridgelines showing how token usage distributions were shifting over time.

This was sidecar analytics: it told us where curiosity lived. It made it obvious that some orgs had quietly become AI-native while others were still warming up. It gave leaders something more honest than “my sense is...”

But it was still fundamentally about who is experimenting — not who is changing how they work.

Stage 2: Behavior over volume — who’s actually changing?

Next, we stopped congratulating ourselves for big MAU numbers and started asking different questions. Are people using AI once or twice a month, or several times a week? Is their activity all in chat, or are they using AI where the real work happens — in Docs, Sheets, Slides, Gmail, code, tickets? Are they only consumers, or are they creating and sharing their own assistants?

We shifted from raw counts to engagement profiles: inactive, dabblers, regulars, power users, and the small cohort of super-engaged people who had rebuilt chunks of their job around AI.
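As a rough illustration, here’s how a frequency-and-breadth segmentation might look in code. The tiers match the profiles above, but the thresholds are invented for the sketch; ours were tuned against our own distributions, and yours should be too.

```typescript
// Illustrative engagement tiers based on frequency and breadth, not raw volume.
// Thresholds are placeholders; tune them against your own distributions.

type Tier = "inactive" | "dabbler" | "regular" | "power" | "super-engaged";

interface Engagement {
  sessionsPerMonth: number;   // how often they touch AI
  surfacesUsed: number;       // chat, Docs, Sheets, Gmail, code, tickets...
  assistantsPublished: number;
}

function tierOf(e: Engagement): Tier {
  if (e.sessionsPerMonth === 0) return "inactive";
  if (e.sessionsPerMonth < 3) return "dabbler";
  // "Regular" means habitual use, even if it lives in one surface.
  if (e.sessionsPerMonth < 12 || e.surfacesUsed < 3) return "regular";
  // "Power" means frequent AND broad: AI shows up where the work lives.
  if (e.assistantsPublished === 0) return "power";
  // "Super-engaged" users have rebuilt chunks of the job around AI.
  return "super-engaged";
}
```

Note what never appears in that function: raw token volume. Frequency and breadth drive the tiers, which is the whole point of the shift.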

That’s when the middle layer problem came into focus. Frontline ICs were curious. Senior leaders were enthusiastic. The manager band — the people living inside KPIs, templates, compliance anxiety, and career risk — showed lots of sporadic experimentation but very little systematic redesign.

The warehouse let us see that pattern. It didn’t fix it.

Stage 3: Outcomes and deletion — what disappeared?

The real shift came when we stopped asking “where are people using AI?” and started asking “where did work actually disappear?”

Every time we made an agent the front door to a workflow and retired the old path, the data looked different. Manual reports stopped showing up because they were never created — an agent generated them instead. Cycle times for certain approvals collapsed from days to hours. Token usage went up where it mattered, while keystrokes in legacy applications quietly went down.

In the dashboards, that showed up as fewer manual touches, shorter timelines, and the absence of artifacts that had previously appeared like clockwork. In real life, it showed up as teams saying: “I can’t believe we used to do this by hand.”

The common ingredient in every successful case was not a better model or a clever prompt. It was a leader willing to say: “This letter now starts in an agent; you only review.” “This deck is dead. The agent produces the analysis; we talk about decisions.” “You will be evaluated on how intelligently you use AI, not how heroically you grind.”

That is the moment when analytics stops being a set of charts and starts being receipts for deletion.

Garbage Cans, Bitter Lessons, and Why Outputs Matter

Ethan Mollick uses a helpful metaphor for how organizations usually approach AI: the “Garbage Can” problem. Most companies respond to AI by trying to map every messy internal process before they do anything. They tumble around in the garbage can of meetings, checklists, handoffs, and informal Slack rituals, hoping to clean it up enough for automation.

The alternative, drawn from the “Bitter Lesson” in AI research, is to focus less on the process and more on the outputs. Instead of untangling every internal path to a sales report, you define what a good sales report looks like, feed AI enough examples, and let it find its own route through the chaos.

That framing pairs with the deletion problem. If you obsess over inputs — the fields, the forms, the legacy workflows — you build better tools to do the old work faster, and you feel mysteriously underwhelmed by the impact. If you obsess over outputs — “this is what good looks like” — you create the conditions for something more uncomfortable and more interesting: you can look at an entire category of work and ask, “If we can reliably get this output from an agent, what can we delete?”

Not just steps. Whole artifacts. Whole rituals.

The 2026 Shift: From Dashboards to an AI “HRV”

So what does all of this look like when you wire it into a real analytics warehouse instead of a slide?

At Logitech, the warehouse is already doing the heavy lifting: one enriched row per person, joined on email; signals from everywhere AI shows up in their day — LogiQ, Gemini, copilots, training, assistant creation; org context wrapped around it all. That’s the raw material. The next step is turning it into something people can feel and act on, not just admire in a dashboard.

That’s what AI Pulse is meant to be.

If the last few years were about building visibility, Pulse is about building a kind of AI heart-rate variability for the organization. HRV isn’t just “higher is better.” It’s a composite signal about how your nervous system is coping. It’s personal, it changes with context, and the real value is in the trend, not the absolute number.

Pulse is the AI version of that. It takes the exhaust from the warehouse and blends it into six dimensions (sketched in code below):

  • Consistency — Are you touching AI often enough that it becomes muscle memory rather than a once-a-month stunt?

  • Intensity — When you use AI, are you pushing real work through it, or just editing subject lines?

  • Creation — Are you only consuming, or are you publishing assistants and templates that live beyond you?

  • Impact — Does any of that created work get reused, or does it die in your drafts folder?

  • Breadth — Are you stuck in chat, or are you bringing AI into Docs, Sheets, Slides, Gmail, tickets, code — the places your work actually lives?

  • Skills floor — Have you done at least the basic training so you’re not reinventing prompt engineering from first principles?

Each person ends up with a Pulse that reflects how they work with AI, not just how often they open a bot. And like HRV, the goal isn’t to chase someone else’s number — it’s to watch your own pattern move in the right direction.
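For the technically minded, here’s a minimal sketch of how a composite like Pulse could be computed. The six inputs mirror the dimensions above; the 0-to-1 normalization, the equal weights, and the trend thresholds are assumptions for illustration, not our production formula.

```typescript
// Hypothetical Pulse score: blend six normalized dimensions into one signal.
// Like HRV, the trend matters more than the absolute number.

interface PulseInputs {
  consistency: number;  // 0..1: how regularly AI is touched
  intensity: number;    // 0..1: how much real work flows through it
  creation: number;     // 0..1: assistants/templates published
  impact: number;       // 0..1: reuse of created work by others
  breadth: number;      // 0..1: surfaces beyond chat
  skillsFloor: number;  // 0..1: baseline training completed
}

const clamp01 = (x: number) => Math.min(1, Math.max(0, x));

function pulseScore(p: PulseInputs): number {
  // Equal weights for the sketch; real weights would be tuned.
  const dims = [p.consistency, p.intensity, p.creation, p.impact, p.breadth, p.skillsFloor];
  const mean = dims.map(clamp01).reduce((a, b) => a + b, 0) / dims.length;
  return Math.round(mean * 100); // 0..100, read as a trend, not a league table
}

// The signal is your own trajectory: compare this month to your own baseline.
function pulseTrend(history: number[]): "improving" | "flat" | "declining" {
  if (history.length < 2) return "flat";
  const delta = history[history.length - 1] - history[0];
  return delta > 5 ? "improving" : delta < -5 ? "declining" : "flat";
}
```

Treated this way, the number is a conversation starter, not a leaderboard.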

Because the warehouse already knows who you are, where you sit, and which applications you touch, Pulse can do three things a static dashboard never will. For an individual, it can say: “You’re consistent but shallow — next month, move one real workflow into AI.” For a manager: “Your team’s Pulse shows lots of dabbling, almost no creation; similar teams have already retired three manual reports. Which one do you want to kill first?” For the C-suite, it becomes a live map of where the organization is learning, where it’s stuck, and where there’s obvious slack.

The warehouse tells you what happened and where. Pulse translates that into who needs what next. The warehouse is your lab notebook. Pulse is the coach.

The Sequence That Actually Works

If you’re building your own AI analytics journey, here’s the sequence. You don’t need to copy our stack. You do need to copy the order.

Build a single, honest picture. Join all your AI exhaust into one place — internal assistants, copilots, Workspace features, training completions. One row per human, updated regularly, enriched with org chart data. Until you can see behavior in context, you’re guessing.

Move beyond “active.” Stop reporting MAU as if it means anything on its own. Define a small set of behavioral segments — inactive, dabbler, regular, power, super-engaged — based on frequency and breadth, not raw volume. This will instantly change leader conversations from “how many” to “what kind.”

Highlight the manager band. Slice your data by reporting line. You will almost certainly find what we did: curious frontline employees, enthusiastic senior leaders, and a stressed, skeptical, overloaded middle that has the hard job of reconciling AI with KPIs, compliance, and their own career risk. Build views that make that visible — not to blame, but to focus support.

Tie analytics to deletion candidates. Make a list of work you’d happily never see again — recurring decks, status emails, log-into-three-systems keystroke rituals. For each, ask two questions: could an agent reasonably handle it? And who has authority to say the old version is done? Instrument those pilot areas differently. Measure cycle time before and after. Measure the presence or absence of old artifacts. Measure how often people bail out of the agent path and go back to manual.
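Concretely, instrumenting a deletion candidate can be almost embarrassingly simple. Here’s a hedged sketch in which every field name is hypothetical; the three receipts it computes (artifact absence, cycle-time reduction, bail-out rate) are the ones described above.

```typescript
// Illustrative deletion-candidate instrumentation. All names are hypothetical.

interface WorkflowPilot {
  name: string;
  cycleTimeBeforeHrs: number;   // median, measured before the agent path
  cycleTimeAfterHrs: number;    // median, measured after
  oldArtifactsExpected: number; // manual reports that used to appear per month
  oldArtifactsSeen: number;     // how many still appear
  agentRuns: number;            // times the agent path was started
  manualBailouts: number;       // times someone fell back to the old way
}

function deletionReceipts(p: WorkflowPilot) {
  return {
    workflow: p.name,
    // The headline signal: did the old artifact actually stop existing?
    artifactDisappearanceRate:
      p.oldArtifactsExpected === 0 ? 0 : 1 - p.oldArtifactsSeen / p.oldArtifactsExpected,
    cycleTimeReduction: 1 - p.cycleTimeAfterHrs / p.cycleTimeBeforeHrs,
    // A high bail-out rate means the agent path isn't trusted yet.
    bailoutRate: p.agentRuns === 0 ? 0 : p.manualBailouts / p.agentRuns,
  };
}
```

If the bail-out rate stays high, the agent path isn’t trusted yet; that’s a design problem, not a training one.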

Turn dashboards into conversations. Your goal is not a perfect set of charts. Your goal is a series of uncomfortable but productive conversations: “Why does this team still look like 2022 while their neighbor looks like 2026?” “What would it take for you to trust the agent-first version?” “How will we know, three months from now, that this wasn’t just another demo?”

Layer on coaching. Once you have even a rough version of Pulse, use it to offer specific next steps, not just scores. For individuals: “Convert one recurring task into an agent this month.” For managers: “Here are three workflows where your peers have already gone agent-first. Which one do you want to tackle this quarter?” This is where analytics finally earn their keep — when they change what people do next Monday, not just what they nod along to in Friday’s town hall.

The Uncomfortable Mirror

It’s tempting to blame AI’s slow organizational impact on the technology. The models hallucinate. The interfaces are new. The vendors are noisy. The policy landscape is confusing. All real.

But the more time I spend staring at our own data, the more convinced I am that the core problem is simpler and closer to home: we are very good at adding new tools. We are terrible at deleting old work.

Sure, this is “just change management,” in the same way the iPhone was “just a phone.” The interface changed, and the work changed with it.

McKinsey’s latest data shows 88 percent of organizations using AI regularly — and only 6 percent translating that into meaningful earnings impact. HBR confirms what we’ve seen firsthand: people experiment, but they don’t integrate. The tools get adopted; the work doesn’t change.

MIT’s digital twin shows that, technically, we could automate a meaningful chunk of the labor market today.

The bridge between those realities is not another model, another pilot, or another training course. It’s the willingness of leaders — starting with the messy, overloaded middle — to say: “This report is gone.” “This deck is dead.” “This workflow is now agent-first, and we will judge you on how well you manage that agent, not on how fast you can copy and paste.”

Your AI dashboard will not do that for you.

But if you design it right, it will stop letting you pretend you already have.


If this was useful, share it with the person in your org who owns the AI dashboard. They need to hear this before the next quarterly review.

Related: Eric Porres on Beyond the Prompt
Related: The Rising Opportunity Cost of Being Human

