Summary:
Leaders can’t afford to take a “wait and see” approach to adopting generative AI. They need a plan for applying it differently from others in the value chain. The authors introduce a framework for thinking about gen AI strategically and offer practical advice on how to apply it to the tasks that make up jobs.
The questions about generative AI that we hear most often from business leaders include: When will gen AI match the intelligence of my best employees? Is it accurate enough to deliver business value? Is my CIO moving fast enough to lead our AI transformation? What are my rivals doing with gen AI? But those questions are misdirected. They focus on the intelligence of gen AI and its trajectory—how good gen AI is and how fast it’s improving—rather than on its implications for business strategy. What leaders should be asking is this: How can my organization use gen AI effectively today, regardless of its limitations? And how can we use it to create a competitive advantage?
This article—which draws on our experience working with hundreds of managers, leading gen AI initiatives ourselves, and researching digital transformation and strategy—proposes a framework for thinking about gen AI strategically and offers practical advice. We argue that a cautious “wait and see” approach—motivated by gen AI’s flaws, such as hallucinations—is potentially dangerous. But we don’t mean to imply that speed wins. Strategy does. Companies need to apply gen AI differently from their competitors and from others in their value chain. Here’s the argument for moving forward now:
Nontechie employees can use gen AI without support from experts. For decades AI usage was largely confined to the domain of engineers, computer programmers, and data scientists. But gen AI, led by OpenAI’s ChatGPT, changed that by enabling interactions using natural language. Its breakthrough wasn’t just an improvement in intelligence; it was also a dramatic increase in access. Today everyone in the organization can use gen AI tools, and they don’t need deep technical expertise, the support of a data science team, or central IT’s approval. What’s more, gen AI is increasingly being embedded into the tools people already use—email, videoconferencing, spreadsheets, CRM software, ERP systems—lowering the barriers to adoption even further.
This advancement in human-computer interaction resembles the transition from early command-line computing to the graphical user interface (GUI). In the late 1980s and early 1990s, Windows radically transformed personal computing—not by making computers significantly more powerful but by allowing people to access that power without knowing MS-DOS commands. In much the same way, gen AI makes sophisticated machine-learning models available to anyone who can converse with it in writing or, eventually, by speaking.
Value-creation opportunities exist now. Waiting for a flawless, all-powerful, agentic AI is a mistake. Despite its flaws, gen AI can save time, reduce costs, and unlock new value. Holding off because the output isn’t perfect misunderstands the opportunity. Gen AI can already deliver meaningful improvements and efficiencies in many areas of your business. The benchmark shouldn’t be perfection; it should be relative efficiency compared with your current ways of working.
Competitive advantage comes from using gen AI more strategically than others, not just faster. A lasting advantage from gen AI can be achieved only by applying it differently. Everyone has access to gen AI. If you and your competitors use similar tools for similar tasks, most of the gains will ultimately flow to others in the value chain as new competition erodes margins. More perilously, your own customers and suppliers may disintermediate you by using gen AI to handle the tasks you previously performed for them. This means that competitive advantage will hinge on how distinctively you use gen AI: which tasks you delegate to it and reimagine, how you use human expertise to complement it, and what new possibilities you unlock.
Where and When to Use Generative AI
Gen AI’s ubiquitous access and versatility create a new challenge: narrowing down the possibilities to find the best place to begin. Rather than asking whether gen AI performs as well as a human, start by breaking down jobs into their component tasks and ask: Which of these is gen AI well suited to handle today?
Consider the following activities: hiring critical employees, diagnosing cancer, and providing psychotherapy to at-risk individuals. These are often cited as areas where gen AI tools are beginning to approach human levels of intelligence and sophistication. Yet the idea of replacing humans in these roles typically meets strong resistance—and for good reason. The potential consequences of an error here are significant. Misdiagnosing cancer or mishandling a vulnerable patient can have life-altering effects. Choosing the wrong hire for a key leadership role can damage a company’s culture for years.
Now consider another set of tasks: summarizing student course evaluations, screening job applicants’ résumés, and assigning hospital beds. What distinguishes these examples from the first set isn’t necessarily the intelligence required but the cost of getting it wrong. A course evaluation summary that misses a nuance or a preliminary résumé screen that overlooks a marginal candidate creates only limited risk. Assigning hospital beds relies primarily on explicit, structured data (such as availability, patient needs, and expected discharge rates), which AI systems can process reliably.
This illustrates an important principle: The suitability of gen AI for a given task depends not just on gen AI’s capabilities but on two deeper factors. The first is the cost of errors: how serious the consequences would be if gen AI makes a mistake. If an error in a task would lead to serious harm, financial loss, or reputational damage, then firms must be far more cautious about employing gen AI to perform it without human oversight. The second factor is the type of knowledge the task demands. Tasks that rely on explicit data (structured or unstructured information that can be captured and processed), such as screening résumés and summarizing course evaluations, are well suited for gen AI. Other tasks—such as psychotherapy, hiring for soft skills, and nuanced leadership decisions—require tacit knowledge: empathy, ethical reasoning, intuition, and contextual judgment built through human experience. These tasks are fundamentally harder for gen AI to perform because they involve not just retrieving information but also interpreting nuance, responding flexibly to context, and applying judgment in ambiguous situations.
These two dimensions—cost of errors and type of knowledge required—form the foundation of our framework for identifying where and how to use gen AI effectively. (See the exhibit “A Framework for Choosing Where and How to Use Gen AI.”)

Applying the Framework
Applying the framework starts by asking the right questions about gen AI. Rather than focus on the intelligence of gen AI (how smart it is and how fast it’s improving), organizations should examine its usefulness, which depends heavily on the task at hand. They should ask: Where is the cost of errors low enough to use gen AI today? Even when human insight and creativity are required, are there components of these processes that gen AI could handle? To use the framework, break down jobs into their component activities and situate them on the framework, using as your guide the cost of making an error and the knowledge needed to complete the task. Placing the tasks in the appropriate quadrant makes it clear which ones gen AI can handle faster, cheaper, or better.
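For teams that want to operationalize this triage, the sketch below shows one minimal way to tag tasks on the two dimensions and map them to the framework’s quadrants. It is an illustration under our own assumptions: the task names, labels, and quadrant placements are hypothetical, and in practice the placements should come from managers’ judgment, not a lookup table.

```python
# Illustrative sketch: tag each task on the framework's two dimensions
# (cost of errors, type of knowledge) and map it to a quadrant.
# Task names, labels, and placements below are hypothetical examples.

QUADRANTS = {
    ("low", "explicit"): "No regrets",
    ("low", "tacit"): "Creative catalyst",
    ("high", "tacit"): "Human first",
    ("high", "explicit"): "Quality control",
}

def classify(cost_of_errors: str, knowledge_type: str) -> str:
    """Return the quadrant for a task, given its error cost ('low' or 'high')
    and the knowledge it requires ('explicit' or 'tacit')."""
    return QUADRANTS[(cost_of_errors, knowledge_type)]

tasks = [
    ("Summarize meeting notes", "low", "explicit"),
    ("Draft marketing taglines", "low", "tacit"),
    ("Hire a senior executive", "high", "tacit"),
    ("Draft a client contract", "high", "explicit"),
]

for name, cost, knowledge in tasks:
    print(f"{name}: {classify(cost, knowledge)} zone")
```

The point is the discipline of the tagging, not the code: once a task is placed, its quadrant tells you whether to automate it, augment it, constrain it, or keep a human reviewer in the loop.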
Now let’s walk through each of the four quadrants.
The no regrets zone. The lower-left quadrant, where the cost of errors is low and explicit knowledge is required, contains the clearest and most immediate opportunity for organizations. This is where gen AI should be deployed today and where AI agents will thrive in the future. Tasks in this quadrant rely on clear, documented data, and errors are relatively harmless. You don’t need perfect accuracy here. The real value lies in completing tasks faster, more cheaply, or at a greater scale than before.
Consider a few examples. Gen AI can screen résumés and quickly flag candidates who should be considered for jobs based on well-defined criteria. It can approve low-dollar reimbursements—a tedious but low-risk task. And it can quickly draft responses to common customer inquiries, such as questions about refund policies or shipping timelines. Using gen AI in place of humans for these tasks will save time, and the people who had been doing them can be redirected to higher-value interactions. In addition, there are valuable tasks in this quadrant that humans weren’t doing previously because they were too tedious, time-consuming, or expensive. One example: staffing every meeting with a human stenographer. Gen AI can capture the conversation in a meeting and extract key themes, action items, and decisions within seconds.
When considering whether to enlist gen AI for tasks in this quadrant, don’t ask whether gen AI’s output is as good as a human’s or how gen AI can be used for the things you already do. Real breakthroughs can come not just from replacing old work but from unlocking work that was never feasible before. Here are the key questions to ask:
Are the cost savings and speed gains large enough from using gen AI that we can tolerate a slight impairment in the quality of output?
How can we use gen AI for the things we don’t do today or that are too costly to do?
The creative catalyst zone. The upper-left quadrant, with a low cost of errors and a need for tacit knowledge, is where gen AI can serve as a creative catalyst, helping humans perform tasks that often benefit from originality. Crucially, the refinement of gen AI’s output and the final judgment on what to adopt rest with humans. Mistakes can be tolerated because the quality of the results is subjective: There is no definitive “best” marketing slogan or “perfect” product design because people’s views of what is best or perfect are personal. Because the cost of getting these tasks slightly wrong is low, gen AI can meaningfully augment human creativity by speeding up experimentation, generating a greater volume of ideas, and enabling broader participation in the creative process. Gen AI allows everyone, from entry-level staff and team members who may not have thought of themselves as creative to senior creatives, to think and work more like innovators. (See “How Generative AI Can Augment Human Creativity,” HBR, July–August 2023.)
The key to figuring out how to apply gen AI in this quadrant is to deconstruct the creative task and identify where gen AI can expand the capacity of humans to add value through their creativity. For example, marketers can use gen AI to produce 20 possible taglines instantly, giving creative teams a broader pool of options to refine. Designers can generate visual or functional variations rapidly and then manually select and perfect the most-promising concepts. Presentation creators can ask gen AI to outline key points, suggest narrative arcs, or generate visual mock-ups, freeing them to focus on tailoring the message to their audience. Even in training contexts, mock interviews or simulations can be generated quickly to enrich preparation exercises.
Don’t ask whether gen AI is as creative or original as a human—a standard it was never designed to meet. Here are the key questions you should ask:
Can gen AI save time for creatives?
Can it make it easier for noncreatives to participate in creative tasks?
The human-first zone. The upper-right quadrant is where the stakes are highest. In this domain gen AI may act as an enabler but not a decision-maker. Tasks here involve subjective judgment, situational nuance, and complex decision-making—and mistakes carry serious consequences, whether financial, legal, reputational, or personal. Trust, ethics, and long-term strategy are often on the line. Errors can have lasting consequences: A poor executive hire can damage a company’s culture; a strategic misstep can erode billions in value; a mishandled medical diagnosis can cost a life.
Tasks like hiring critical employees, setting strategy, integrating complex enterprise systems, navigating crises, and managing sensitive HR interventions all fall squarely into this quadrant. They carry high risk and demand judgment, contextual understanding, ethical reasoning, and emotional intelligence—qualities that are difficult to codify or reliably automate.
In these domains, gen AI should be used with extreme caution. It cannot replace the human role at the center of these decisions. Its contribution should be carefully constrained and supportive, not central. Yet a smart deconstruction of tasks in this quadrant reveals opportunities for gen AI to provide valuable support—it can expand a human’s capacity to perform these tasks without undermining that person’s control of the decision. For example, in hiring, gen AI can help refine job descriptions or suggest interview questions; in strategy, it can synthesize market data or surface emerging trends; in governance, it can model reputational risks; in crisis management, it can draft preliminary communications and monitor public reaction; in healthcare, it can help clinicians calculate risk scores to triage patients when deciding who requires immediate attention and who can wait to be treated; and in managing employees, it can propose elements of a performance-improvement plan. Leaders and knowledge workers all have some tasks that fall in this quadrant.
When assessing tasks in this quadrant, don’t waste time wondering when gen AI will be smart enough to do them autonomously. The critical question to pose is this:
Which tasks can gen AI assist with today to make human judgment more effective?
The quality control zone. The lower-right quadrant contains knowledge-heavy tasks that gen AI can technically perform well—because they are grounded in explicit, structured information—but for which even small mistakes could result in serious consequences. These are high-accountability domains such as law, finance, and software development, where information is clear and codified yet the standards for accuracy are extremely high. This quadrant is ideally suited for a human-in-the-loop model: Gen AI provides speed and scale while humans provide judgment, oversight, and final accountability.
Take the drafting of legal agreements. Traditionally, preparing a contract involves several stages: understanding client needs, composing clauses, negotiating terms, revising language, and approving the final document. Today a lawyer can use gen AI tools such as Harvey to generate a strong draft contract in minutes, freeing her up to focus on negotiations and final review. Similarly, in software development, gen AI tools like GitHub Copilot can generate boilerplate code or suggest debugging fixes, accelerating development cycles—although experienced developers must still conduct quality assurance and verify functionality. In financial due diligence, gen AI can scan large volumes of documents and detect anomalies or opportunities, but human analysts must interpret the findings in context. And in healthcare, gen AI can recommend patient bed assignments based on structured criteria while leaving the final decisions to clinical staff, who must weigh nuances missed by algorithms. For high-risk tasks that rely on explicit knowledge, have gen AI handle the repeatable, data-heavy parts, and have humans perform the steps where nuance, interpretation, or final accountability really matters.
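As one way to picture this human-in-the-loop division of labor, the sketch below shows a gen AI system producing a first draft while a person retains editing rights and the final sign-off. The function names, including the generate_draft placeholder, are hypothetical and do not refer to any specific vendor’s tool or API.

```python
# Illustrative human-in-the-loop sketch for the quality control zone.
# generate_draft() is a hypothetical stand-in for whatever gen AI tool produces
# the first pass (a draft contract, code suggestion, or analysis memo);
# the release() gate ensures nothing ships without explicit human approval.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    content: str
    approved: bool = False
    reviewer: Optional[str] = None

def generate_draft(prompt: str) -> Draft:
    # Placeholder: in practice this would call a gen AI tool or API.
    return Draft(content=f"[AI-generated draft for: {prompt}]")

def human_review(draft: Draft, reviewer: str, approve: bool,
                 edits: Optional[str] = None) -> Draft:
    """A human expert revises the draft and carries final accountability."""
    if edits:
        draft.content = edits
    draft.approved = approve
    draft.reviewer = reviewer
    return draft

def release(draft: Draft) -> str:
    # The gate: no output leaves the workflow without human sign-off.
    if not draft.approved:
        raise ValueError("Draft has not been approved by a human reviewer.")
    return draft.content

draft = generate_draft("Supply agreement, 12-month term, net-30 payment")
draft = human_review(draft, reviewer="reviewing attorney", approve=True)
print(release(draft))
```

The essential design choice is the release gate: speed comes from the machine’s draft, but accountability stays with a named human reviewer.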
To identify tasks that fall into this domain, ask these questions:
Where is human expertise truly essential?
Which parts of the workflow can be safely delegated to gen AI?
It’s often said that those who use AI will replace those who don’t. But the reality is more complex: As the framework illustrates, some tasks are best done by AI alone, others through human-AI collaboration, and some still require purely human judgment. Rather than debating replacement versus complementarity, the key is understanding which tasks remain distinctly human.
Anticipate the Impact on Your Industry
The fact that your customers, suppliers, and competitors can access the same technology creates the paradox of access: Because everyone can use it, it becomes dramatically harder to capture value with it. If you and your competitors apply the technology to similar tasks and follow the same best practices, then everyone becomes more efficient but no one secures long-term profits from it. Competitive pressure ultimately causes the gains to go to customers and suppliers through lower prices or better terms. This is a pattern similar to the one from Internet 1.0: Early adopters enjoyed brief advantages, but as digital technologies spread, the benefits flowed to consumers, not firms. Think of the rise of airline e-ticketing in the 2000s. Carriers all competed using the same technology, and customers reaped the benefits of lower airfare. Since the 1990s, CAD and ERP software have streamlined manufacturing and supply chains, but now they are table stakes, not a source of advantage. These examples are reminders to be ready for the following developments:
AI-first entrants are coming. In the not-too-distant future your fiercest competition may not be your familiar peers but a new breed of solo entrepreneurs and micro-teams. Imagine starting a marketing agency today from the ground up. Rather than hiring dozens of people to conduct market research, write copy, design graphics, and answer questions from clients, a small team of experts (or even one intrepid entrepreneur) could eventually rely on AI for all these tasks. Such AI-first entrants could match your scope and speed while carrying a fraction of your headcount. The building blocks for this vision already exist in the form of software development agents and AI sales reps, with more tools on the horizon.
Customers and suppliers can use gen AI against you. Their access to gen AI can upend your bargaining power. Law firms have been dealing with a similar issue since the 1990s. Work that once required scores of paralegals and a complete law library could suddenly be done by one lawyer with an internet-connected PC. A company can now hire an in-house attorney for routine work instead of sending every matter to a full-fledged law firm. The number of U.S. lawyers employed as in-house counsel tripled from 1997 to 2020; they currently outnumber those employed in the 500 largest law firms. The shift squeezed Big Law on two fronts. Clients pushed back on the once-untouchable billable hour: Today nearly 90% of large firms offer flat-fee or other pricing that is more favorable to the customer. And lawyers who once had no choice but to suffer 100-hour weeks at a white-shoe law firm can move in-house or start solo practices, empowered by digital tools that replace big-firm infrastructure.
Gen AI accelerates this pattern. With legal-research bots and contract-writing agents, corporate clients can pull even more legal work in-house. The same trend is occurring with other professional services, such as software development contracting, M&A consulting, and advertising. The most talented and entrepreneurial employees from those firms will have more and more options for where to work.
Building an AI-Based Competitive Advantage
As we’ve noted, moving quickly is important, but speed alone won’t put you ahead of the impending competition. You need a strategy to differentiate how your organization creates value with gen AI. We recommend taking the following steps:
Mandate broad access to technology. Everyone in your company has tasks in all four quadrants of the framework, and so everyone has the potential to do more by using gen AI. Every person in your organization should evaluate which tasks gen AI can handle better, or at least serviceably. Also have each person consider tasks that previously were too costly or time-intensive to do but that gen AI could perform inexpensively and quickly—for example, sending personalized holiday greetings to every business contact from the past year or summarizing every meeting attended. Experimentation and training should be encouraged broadly—through top-down messaging that signals their importance and bottom-up forums where employees can share lessons learned. Doing these things will require building faster pathways for frontline teams to test and scale gen AI tools.
Start by removing the bottlenecks that keep these powerful tools out of the hands of your people. If access stalls at the IT desk or hides behind compliance forms, you cede ground to rivals whose staff can experiment in real time. IT departments understandably struggle to keep up with the relentless proliferation of ever-improving models and specialized applications. Delegating full control of gen AI to the CTO, no matter how capable that person is, can slow progress. In 2023 JPMorgan Chase temporarily blocked its staff from using ChatGPT while its security teams performed third-party reviews—a sensible precaution but one that prevented 60,000 users from experimenting. Every organization faces this trade-off: Cybersecurity concerns are real, but if the loudest message employees hear is what not to try, innovation will only move as fast as your slowest approval queue. Many IT leaders want to take maximum precautions against all risks. But they should focus on guarding against the most-critical ones—such as the leakage of regulated or highly sensitive data (for example, personally identifiable information)—through targeted employee policies and vendor security reviews precisely defined to shield against those threats.
Once you’ve done this, it’s time to create a strategy. Differentiating what your organization does with gen AI will require two long-term efforts.
Reimagine all assets as data. The capabilities of the initial generations of gen AI were limited by the public data they were trained on. Increasingly, firms are equipping employees with rich proprietary data—which can be accessed through gen AI search or used to train a model imbued with the knowledge of the firm. To follow suit, you must do the following:
Ascertain where the data resides in your organization today and centralize it. All companies need to start centralizing data that has been scattered across or siloed in business units, functions, and geographies. Your infrastructure can anchor your competitive advantage. Before the era of gen AI, in the 2000s, the casino operator Harrah’s Entertainment funneled every slot pull, hotel check-in, and dinner receipt into a single data warehouse. The insights it gleaned from its data trove allowed it to grow revenue faster than its competitors, which could copy the spectacle and glitz of Harrah’s casinos but not its data infrastructure or its culture of rapidly leveraging that data. Having the discipline to consolidate data is even more critical today, and not just for customer analytics. Generative AI allows a firm to extract insights from all its myriad messy and unstructured data—including from partners and through acquisitions—to drive decisions across the whole organization. It will take years to build the infrastructure to gather and make sense of that data, so begin the effort now.
Identify the data that you aren’t yet collecting. Every activity of a business—from customer interactions to operational processes to internal emails and meetings—is a source of proprietary data to be tapped and leveraged. The data you don’t collect today is a seed you never plant; start capturing critical data streams now so that they bear fruit when you need them.
Redesign your organization. In the long term, it will not be enough to layer gen AI onto existing workflows. Organizations will need to redesign themselves around a gen AI–first vision of the business. To do that, you’ll need to organize to get the most out of your data and your people.
Let’s look at data first. Even proprietary data eventually becomes commoditized. But it is hard for others to copy an organization that is tailored to continually exploit it. In the 1990s Capital One rewired the whole bank around its data by combining marketing, risk, and IT teams and having them perform thousands of microexperiments a year. Operations, customer service, and HR teams supported this learning engine. Its most famous experiment, a “balance transfer” teaser-rate offer, let customers move outstanding balances from rival issuers to Capital One’s credit cards. The promotion drove explosive account growth. The firm closely tracked user behavior longitudinally, and over time the data warned that newer applicants were higher risk. That gave management the foresight to phase out the product. Meanwhile competitors, lacking this feedback loop, continued to copy the offer until their losses became catastrophic. Companies today will need to create a feedback loop between data and a continuous learning process to translate gen AI insights into action ahead of the marketplace.
You also need to revisit how you get the most out of your people. Generative-AI tools free up chunks of time, but early research suggests that the windfall can evaporate into idle tinkering, low-value busywork, or outright downtime (see “How Is Your Team Spending the Time Saved by Gen AI?” HBR, March–April 2025). To keep the savings from slipping away, treat time as you would any strategic resource: Manage it carefully. Managers should work with employees to estimate and track the hours AI shaves off their key tasks, set clear expectations for how those hours will be redeployed, and tie recognition or incentives to how effectively the saved time is used. (See the exhibit “Why Don’t Gen AI Gains Show Up in My P&L?”) These measures will have to evolve alongside the technology to ensure that AI-driven efficiency translates into real gains for the business and meaningful growth for employees.

Start thinking today about what an AI-first organization chart should look like, even if the changes won’t come until later, because it takes a long time to implement an organizational redesign. AI will eliminate some existing roles, most likely those with a high proportion of work in the “no regrets” quadrant (low cost of errors and explicit knowledge). In the other quadrants, gen AI will complement the work of people in the organization—but not necessarily the same people who are doing those tasks today. You will need to rethink the entire org chart. For instance, some functional employees may become cross-functional. And instead of supervising someone who works with software, middle managers may work directly with software. Maybe a few people will focus only on the quadrant of “human-first” tasks.
In summary, strategic differentiation will come from three sources: (1) rapid and targeted deployment of gen AI across tasks, which is valuable in the near term if your competitors remain fixated on intelligence or paralyzed by concerns like hallucinations; (2) proprietary data that enhances gen AI’s performance or process fixes that prevent its value from being lost to organizational bottlenecks; and (3) unique people, processes, and culture—the “complementary assets” that make gen AI more valuable inside one organization than it is inside others.
. . .
Common misperceptions are keeping many organizations from capturing the full potential of gen AI. Some leaders believe gen AI isn’t yet intelligent enough to be useful; they focus on its imperfections rather than recognizing its potential to lower costs even when quality isn’t perfect. Others fear that its error rate makes it too risky to adopt; they miss the distinction that it’s the cost of errors that matters most. Some insist that gen AI must be perfectly accurate before deployment; they don’t appreciate that in many tasks, 100% accuracy isn’t essential. Still others are frustrated that savings at the task level aren’t yet visible in the P&L; they forget that saving time across tasks doesn’t automatically translate into saved dollars without intentional management, and that sustainable advantage won’t come from merely adopting gen AI but from using it differently. The organizations that recognize these traps, rethink their assumptions, and move deliberately to turn gen AI from a general capability into a true source of competitive advantage will be the ones that succeed.
Copyright 2025 Harvard Business School Publishing Corporation. Distributed by The New York Times Syndicate.