A senior engineering manager has three AI tools open. One is weighing technical decisions. Another is spitting out drafts and summaries. A third is generating code. He keeps bouncing between them, double-checking every output, cross-referencing every recommendation.
By 2 p.m., his brain feels like it has a dozen browser tabs open, all fighting for attention. He's not physically tired. He's cognitively fried.
Sound familiar? If you're leading a team that's adopted AI tools in the past year, there's a good chance your best people are quietly experiencing the same thing.
Researchers at Boston Consulting Group recently put a name to it: "AI brain fry." In a study of 1,488 full-time U.S. workers published in Harvard Business Review this March, they defined it as mental fatigue from excessive use or oversight of AI tools beyond one's cognitive capacity.
Participants described a "buzzing" feeling, a mental fog, difficulty focusing, slower decision-making, and headaches. Fourteen percent of workers using AI reported experiencing it directly. In marketing departments, that number hit 26%.
Here's the thing: AI brain fry isn't a reason to slow down AI adoption. It's a reason to get smarter about how you adopt it. The difference between organizations thriving with AI and those burning out their best people comes down to design: how work is structured, how tools are deployed, and how much cognitive load lands on any single person.
Oversight, Not Tools, Is What Exhausts People
The researchers examined the full range of how workers engage with AI: number of tools used at once, whether AI replaces work or augments it, level of oversight required, and whether AI increased or decreased overall workload. What they found cuts against the more-AI-equals-more-productivity narrative.
The most mentally taxing form of AI engagement was oversight. Workers with high oversight demands expended 14% more mental effort, experienced 12% more mental fatigue, and reported 19% greater information overload than those with low oversight demands. The relationship between tool count and productivity was equally telling: going from one AI tool to two boosted productivity. Three still showed gains, but at a slower rate. After three? Productivity declined.
And here's where the findings get counterintuitive. AI brain fry is not burnout. Burnout is a chronic state of emotional exhaustion that builds over months. AI brain fry is something else — an acute cognitive strain from pushing attention, working memory, and executive control past their limits. It's more like a mental hangover than a slow erosion.
The BCG team found that when AI was used to replace routine tasks, burnout scores actually dropped by 15%. Workers reported higher engagement and stronger connections with colleagues. But mental fatigue didn't follow the same pattern. Oversight-heavy AI work drove more mental strain regardless of whether it reduced the drudge work.
The business costs are real. Workers experiencing brain fry reported 33% more decision fatigue. They scored 11% higher on minor errors and 39% higher on major errors. One 2018 study estimated the cost of suboptimal decision-making at a $5 billion revenue firm at $150 million per year — and a 33% jump in decision fatigue would push that number significantly higher.
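As a rough illustration (our back-of-envelope arithmetic, not a figure from the study), here is what that scaling looks like if you assume, naively, that decision-making costs grow linearly with decision fatigue:

```python
# Back-of-envelope estimate: how a 33% rise in decision fatigue could
# scale the cost of suboptimal decisions, assuming (simplistically)
# that costs grow linearly with fatigue. Baseline figure is the 2018
# estimate cited above; the linear-scaling assumption is ours.
baseline_cost = 150_000_000   # est. annual cost of poor decisions at a $5B-revenue firm
fatigue_increase = 0.33       # reported rise in decision fatigue among brain-fried workers

projected_cost = baseline_cost * (1 + fatigue_increase)
print(f"Projected annual cost: ${projected_cost:,.0f}")
# prints: Projected annual cost: $199,500,000
```

Even under that simple assumption, the exposure grows by roughly $50 million a year at a single large firm.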
There's a retention cost, too: 34% of workers experiencing brain fry showed active intent to leave, compared to 25% of those who didn't. That's a 39% increase in turnover risk concentrated among a company's most intensive AI users.
These aren't edge cases. They're the people organizations are counting on most.
Beyond Cognitive Load: AI Is Quietly Eroding Professional Confidence
There's another layer to this that surveys and cognitive metrics miss entirely.
In our work with clients, we hear something that goes beyond information overload: a quiet erosion of professional confidence. Employees describe feeling guilty for not producing more after AI "saved them time," as if the freed-up hours created a debt they now owe. They feel uneasy presenting AI-assisted work, unsure how much credit to claim and how much to disclaim.
Some mourn the craft skills that used to define their value: the careful research process, the editorial instinct, the analytical rigor that took years to develop and now gets compressed into a prompt.
These aren't just emotional responses. They're signals that AI adoption is outpacing the psychological support structures around it. And they matter for business outcomes. An employee who doubts their own judgment is an employee who second-guesses decisions, over-relies on AI validation loops, and, ironically, increases the very oversight load that causes brain fry in the first place.
Most of the conversation around AI brain fry focuses on cognitive mechanics: working memory, attention, decision fatigue. Almost nobody is talking about this emotional undertow. But it's real, and it compounds everything the BCG study measured.
The Science Has Been Warning Us
The BCG findings didn't emerge in a vacuum. They sit on top of more than three decades of research about how human brains handle information overload.
We've seen this before. In 1996, psychologist David Lewis coined the term "Information Fatigue Syndrome" in a study commissioned by Reuters. He found that 67% of workers reported their professional and personal lives suffering from information overload stress — paralysis of analytical capacity, increased anxiety, eroding decision-making confidence.
By 2007, researchers Tarafdar and Ragu-Nathan had formalized five "technostressors" that emerge when new technology outpaces human capacity to absorb it: techno-overload, techno-complexity, techno-invasion, techno-insecurity, and techno-uncertainty. AI brain fry maps onto at least three. The difference is the speed and scale at which AI amplifies them.
The foundation is cognitive load theory, developed by psychologist John Sweller in the late 1980s. The core insight is simple: working memory has a hard ceiling. When the volume of information we're asked to process exceeds that ceiling, performance doesn't just dip — it collapses. Accuracy drops. Creativity stalls. Errors multiply. The brain wasn't built to monitor multiple AI agents, evaluate their outputs, and make strategic decisions simultaneously.
Then there's what happens when we switch between those tools. Sophie Leroy's 2009 research on "attention residue" showed that when people shift from one task to another, part of their attention literally stays behind — clinging to the unfinished task, quietly consuming processing power. This explains the BCG productivity cliff after three tools: each additional tool isn't just one more thing to manage. It's one more source of residue, one more drain on the finite attention you need for work that actually matters.
Roy Baumeister's research on decision fatigue adds another layer. Our capacity for high-quality decision-making degrades with every decision we make. The cognitive muscle responsible for discernment gets tired. The 33% increase in decision fatigue among brain-fried workers maps directly onto this.
The broader workforce data confirms the pattern is accelerating. Microsoft's 2025 Work Trend Index found that 80% of the global workforce lacks the time or energy to do their jobs, and nearly half say their work feels "chaotic and fragmented." ActivTrak's 2026 State of the Workplace data shows the average focused work session has shrunk to just 13 minutes and 7 seconds, down 9% since 2023. AI isn't the only factor, but it's pouring fuel on a fire that was already burning.
What Leaders Can Do: A Change Management Playbook
To be clear: the lesson is not to slow down AI adoption. It's to design better human and AI workflows. The BCG study identified what helps, and the levers aren't mysterious — they're the same ones good change management has always relied on: clear communication, thoughtful role design, skill development, and cultural signals that match reality. The difference is that this time, the "change" isn't a one-time system migration. It's an ongoing recalibration of how humans and machines share cognitive work.
Define the limits of human and AI responsibility. Just as organizations set spans of control for managers, they need to set spans of oversight for AI. The BCG data is unambiguous here: productivity gains flatten and then reverse after three simultaneous AI tools. That's not a guideline to wave away. It's a design constraint. Teams should audit how many AI tools each role is expected to monitor and set explicit boundaries. When teams embed AI deeply into shared workflows rather than stacking it on individual contributors, cognitive burden drops measurably.
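One lightweight way to run that audit is a simple per-role tool inventory. The three-tool threshold comes from the BCG finding; the role names and tool assignments below are purely illustrative, a hypothetical sketch of what such a check could look like:

```python
# Hypothetical span-of-oversight audit: flag any role expected to
# monitor more than three AI tools at once — the point where the BCG
# data shows productivity gains reversing. The role-to-tool mapping
# here is illustrative, not from the study.
MAX_TOOLS = 3  # span-of-oversight ceiling suggested by the BCG data

tools_by_role = {
    "engineering_manager": ["code_assistant", "doc_summarizer", "decision_copilot", "chatops_bot"],
    "marketing_analyst": ["copy_generator", "insights_dashboard"],
    "support_lead": ["ticket_triage", "kb_search", "sentiment_monitor"],
}

# Collect roles whose expected tool count exceeds the ceiling.
over_limit = {
    role: len(tools)
    for role, tools in tools_by_role.items()
    if len(tools) > MAX_TOOLS
}

for role, count in over_limit.items():
    print(f"{role}: {count} tools (exceeds span of oversight of {MAX_TOOLS})")
```

In practice the inventory would come from an HR or IT system rather than a hard-coded dictionary, but the exercise is the same: make the per-role oversight load visible, then redesign the roles that exceed it.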
Invest in AI management skills, not just AI usage skills. As one engineer in the BCG study put it: "I was working harder to manage the tools than to actually solve the problem." In our work with clients at SVA, we see this consistently. The teams that struggle most aren't the ones using the least AI. They're the ones using it without a framework for problem framing, analysis planning, and strategic prioritization. These meta-skills — knowing when and how to engage AI rather than defaulting to it for everything — are what separate productive AI use from cognitive overload. McKinsey's research reinforces this: 48% of employees say they'd use AI more if they received proper training. The gap isn't about prompting. It's about thinking.
Shift your metrics from activity to impact. Measuring AI adoption by token consumption, lines of code generated, or number of tools deployed incentivizes exactly the kind of use that causes brain fry. Meta, for example, now includes the number of lines of AI-generated code as a performance metric for engineers. The BCG researchers flag this as precisely the wrong signal. Start from a clear business objective with measurable outcomes instead. And be careful how you respond to efficiency gains: don't rush to refill the hours an employee just automated away with new work. Doing so feels punitive and discourages the kind of thoughtful AI use that actually works.
Follow a journey, not a jump. This is where we see the most well-intentioned organizations get it wrong. They try to go from curiosity to full-scale AI integration in one leap, deploying multiple tools across multiple teams before anyone has built the muscle to use them well. That's a recipe for brain fry.
At SVA, we work with clients along what we call the AI Journey: a progression that moves from governance and readiness, through adoption and enablement, into integration and eventually mastery. Brain fry risk isn't uniform across that journey — it peaks early, not because AI is hardest there, but because there's no shared framework yet. Everyone is experimenting alone, with different tools, in different ways. Each person is carrying the full weight of figuring out AI on their own, and the mental load is enormous.
Most organizations start in that early zone — curiosity, some individual experimentation, inconsistent usage. That's fine, but it's also the highest-risk zone for brain fry if it stays that way too long. The first real step is building a foundation: getting clear on where AI fits in your business and helping people get comfortable using it in practical, low-risk ways.
From there, it shifts into adoption — teams using AI more consistently, with shared use cases and lightweight guidance. Brain fry risk starts to drop here because structure absorbs the cognitive load that individual brains were carrying alone. Then comes integration, where AI becomes part of how work actually gets done: reports, meeting summaries, data analysis. Less experimenting, more standard ways of working. And over time, that's where more advanced use cases emerge — automation, decision support, custom solutions.
The key: you don't have to reach the far end of that journey overnight. The organizations seeing the most success start with a defined use case, prove value, then build on it. In the context of brain fry, this matters enormously. A deliberate, sequenced approach to AI adoption doesn't just reduce risk. It protects the cognitive capacity your people need to make AI work in the first place.
The Bottleneck Isn't the Technology, It's How We've Designed Work Around It
AI brain fry doesn't mean AI is the problem. It means the way we design work hasn't caught up to AI. The bottleneck isn't the technology; it's how we're asking humans to interact with it: how many tools, how much oversight, how little structure.
That's not a reason to pump the brakes. It's a reason to get intentional. The organizations that figure this out first will keep their best people, make better decisions, and get far more from AI than the ones still measuring success by how many tools they've deployed.
The question isn't whether your team is using AI. It's whether you've designed the work so their brains can keep up.
© 2026 SVA Consulting
Studies & Sources Cited:
- BCG/HBR Study (March 2026) — Bedard, Kropp, Hsu, Karaman, Hawes, & Kellerman. "When Using AI Leads to 'Brain Fry.'" Harvard Business Review. Study of 1,488 full-time U.S. workers. https://hbr.org/2026/03/when-using-ai-leads-to-brain-fry
- David Lewis — Information Fatigue Syndrome (1996) — Commissioned by Reuters. Coined "Information Fatigue Syndrome" after finding 67% of workers suffered from information overload stress. https://paginaspersonales.deusto.es/abaitua/konzeptu/fatiga.htm
- Tarafdar & Ragu-Nathan — Technostressors (2007) — Formalized five technostressors: techno-overload, techno-complexity, techno-invasion, techno-insecurity, techno-uncertainty. Published in: Information Systems Research, "The Impact of Technostress on Role Stress and Productivity"
- John Sweller — Cognitive Load Theory (1988) — Established that working memory has a hard ceiling and performance collapses when exceeded. Published in: Cognitive Science, "Cognitive Load During Problem Solving: Effects on Learning"
- Sophie Leroy — Attention Residue (2009) — Showed that switching tasks leaves "attention residue" on the prior task, degrading performance. Published in: Organizational Behavior and Human Decision Processes, "Why Is It So Hard to Do My Work?"
- Roy Baumeister — Decision Fatigue / Ego Depletion — Demonstrated that decision-making quality degrades with accumulated decisions. Published across multiple works, notably: Willpower: Rediscovering the Greatest Human Strength (2011)
- Microsoft Work Trend Index (2025) — Found 80% of global workforce lacks time/energy, nearly half say work feels "chaotic and fragmented." https://www.microsoft.com/en-us/worklab/work-trend-index
- ActivTrak — 2026 State of the Workplace — Average focused work session shrunk to 13 min 7 sec, down 9% since 2023. https://www.activtrak.com/resources/state-of-the-workplace/
- McKinsey — "Reconfiguring Work: Change Management in the Age of Gen AI" — 48% of employees would use AI more with formal training; 70% of AI transformation should focus on people and processes. https://www.mckinsey.com/capabilities/quantumblack/our-insights/reconfiguring-work-change-management-in-the-age-of-gen-ai