
Gen AI Won’t Make Your Employees Experts

Harvard Business Review

April 16, 2026


Summary:

Generative AI can help workers perform unfamiliar tasks more quickly, but it doesn’t eliminate the performance gap between novices and experts. Researchers explored this dynamic by running a controlled writing experiment with employees at a fintech firm, dividing participants into three groups based on their level of relevant expertise.





In fields ranging from copywriting to software development, leaders are betting that gen AI can help employees take on more-advanced responsibilities. Research from MIT professor David Autor and others has shown that gen AI shortens the time it takes novices to gain competence at new tasks. But there’s still much we don’t know about the technology’s potential to upskill workers, including one key question: Can it help them perform tasks as well as experts do?

To try to answer that, researchers from Stanford University and Harvard Business School’s Digital Data Design Institute ran a controlled experiment involving 78 employees at IG Group, a United Kingdom–based fintech firm. They began by putting the employees into three groups: experts, adjacent outsiders, and distant outsiders. The experts were writers who regularly drafted articles for IG’s website. The adjacent outsiders were marketing specialists from the writers’ department who had no article-writing experience but had a general understanding of what the writers did. The distant outsiders were developers and data scientists who had no marketing or writing background at all. Each group was asked to complete two tasks: conceptualizing an article like those found on the company’s website, and then writing it. The researchers randomly assigned gen AI to help some participants but not others. IG executives then rated the results of each assignment on a scale from 1 (lowest grade) to 5 (highest).


When conceptualizing an article without help from gen AI, the writers got the highest average score (3.82), followed by the marketing specialists (3.04) and the technologists (3.02). Those results revealed a significant skill gap between the experts and the others. When the subjects were given gen AI assistance, however, the gap narrowed: Concepts developed by writers scored 4.12, on average, while those developed by marketing and technology specialists scored 4.18 and 4.05, respectively. In other words, marketers using AI slightly outperformed writers using AI—and all three groups that used AI outperformed writers who didn’t.

However, when it came to writing the articles, the results differed. Without gen AI, the writers performed the best of all the groups. Yet even using AI couldn’t help nonexperts produce the same quality of work as the experts. Writers, predictably, performed the best of those using the technology (3.96, on average). Marketing specialists aided by AI were close behind (3.92). But the technology specialists aided by AI didn’t do as well; in fact, their scores with and without gen AI were essentially the same (3.38 and 3.42, respectively).

The Gen AI Wall

Why did gen AI boost performance for one task more than for the other—and help the technology specialists so little at writing?

After conducting interviews with participants, the researchers concluded that the further removed workers were from the knowledge needed for a task, the less likely they were to perform as well as colleagues with relevant expertise—even with gen AI assistance. Nonexperts using AI did better at conceptualization because it required less expertise than writing did; people just had to understand whether a proposed topic was good enough. Writing an article, however, involved knowing how to convey the desired message in the right language. One participant offered a metaphor to illustrate this distinction: Conceptualizing is like imagining running a marathon, but writing is like actually running it, which calls for a completely different level of expertise.

And expertise, the researchers found, is what allowed humans to partner more effectively with the AI tools. The marketing specialists understood the general language the writers used and had enough domain knowledge to refine the gen AI–produced content. But the technology specialists (whose work had nothing to do with writing) could not effectively use or improve the AI’s suggestions. They lacked the intuition and knowledge needed to make good decisions about what language to keep and what to discard. The researchers termed this phenomenon “the AI wall,” the limit to how much gen AI can help people perform tasks outside their area of expertise.

This finding has implications for how organizations deploy gen AI tools. It challenges the assumption that the technology can flatten skill hierarchies and enable what academics call “universal task fluidity.” Instead, the researchers contend, gen AI’s effectiveness depends on the expertise distance between the user and the task domain—and they argue that the AI wall is relevant beyond the context of writers and technology specialists.

The researchers recommend two best practices for pairing gen AI with employees of varying levels of expertise:

Don’t overestimate gen AI’s abilities. It’s critical for employees to have a general understanding of, and some experience with, the area they’re applying AI in. Their knowledge should at least be extensive enough to let them assess and improve AI-generated work. During the writing study, for instance, many technologists simply copied and pasted gen AI’s suggestions into articles, because they lacked the nuanced judgment for adjusting and integrating the language. “AI isn’t a magic fix for everything at work if it is not able to fully automate tasks,” says Luca Vendraminelli, the Stanford postdoctoral researcher who led the study. “When AI can’t do the job alone and it replaces experts, it will help some people narrow the gap between themselves and experts, but only in certain situations and when the conditions are right. It’s not a one-size-fits-all solution.”

Rethink how work is done. Consider how your organization needs to change once employees start using gen AI effectively. To get the most value from it, the business may need to alter processes, decision-making approaches, and the ways teams work together. Gen AI tools may even blur job titles in related fields, such as SEO specialist and content strategist. Using them to bridge larger divides—such as those between marketing, sales, and product teams—is much harder, though, because those jobs are tied to different expertise, budgets, and power structures. Designing jobs to be broader and more flexible can help overcome that challenge, but making the shift requires structural and cultural changes.

And as you integrate gen AI into workflows, consider the human context: Who is using it? What do those people know? How well do they interpret and refine AI outputs? “AI can only take people so far,” says Vendraminelli. “Expertise is irreplicable. No technology can substitute for it.”

About the research: “The GenAI Wall Effect: Examining the Limits to Horizontal Expertise Transfer Between Occupational Insiders and Outsiders,” by Luca Vendraminelli et al. (working paper, 2025)


“Gen AI Shortens the Journey to Expertise”

Olga Pirog is the former global head of data and AI transformation at IG Group, the company where the writing study was performed. She has spent two decades using data, analytics, and AI to improve commercial performance. Pirog spoke with HBR about how her team at IG used gen AI and whether it helped close the gap between experts and beginners. Edited excerpts of the conversation follow.

How did gen AI help the marketers write articles almost as well as writers did?

It gave them the hands-on skills they lacked. Marketers had the foundational knowledge because they knew what good content looked like, but they lacked the experience of writing it themselves. Gen AI acted as a bridge, allowing them to execute on par with our specialists. It democratized the craft of writing for those who already understood the concept of marketing.

What larger lesson did you draw from the experiment?

Gen AI shortens the journey to expertise—but it can’t replace real-world experience just yet. The AI system produced solid first drafts, and after the study that freed the expert writers to focus on refining the articles, adjusting their tone, and making sure their SEO elements were right before publishing them.

Do you think gen AI could have eventually turned the marketers, or even the technologists, into expert writers?

It depends on where they started. We saw a divergence: For adjacent roles like marketers, the gap effectively closed—they matched the experts. But for distant roles, like our technologists, the gap remained wide. Because they lacked the foundational context of marketing, they couldn’t judge the AI’s output effectively. This suggests AI accelerates expertise, but only if you are already in the neighborhood of that domain.

Given how much AI helped the experts, should companies hire fewer novices?

That is the danger—many organizations are seeing a drop in junior hiring, but if we hire only experts to edit AI, we destroy the pipeline for cultivating future experts. You cannot develop taste or judgment without doing the work. My concern is that by optimizing for efficiency today, companies are eroding the training ground for tomorrow.

How did gen AI change your approach to training?

My view on how apprenticeship should work has shifted. I used to think the only way to learn was through tactical execution, grinding through hundreds of drafts to build muscle memory. But we saw that for people with the right context, AI handles that execution. The real bottleneck happens when you lack foundational knowledge and can’t judge if the AI is right or wrong. The training model should shift toward teaching people what makes the writing good rather than teaching novices how to write.

Copyright 2026 Harvard Business School Publishing Corporation. Distributed by The New York Times Syndicate.
