American Association for Physician Leadership

A New Model for Ethical Leadership

Max H. Bazerman

March 20, 2023


Summary:

Rather than try to follow a set of simple rules (“Don’t lie.” “Don’t cheat.”), leaders and managers seeking to be more ethical should focus on creating the most value for society.

Autonomous vehicles will soon take over the road. This new technology will save lives by reducing driver error, yet accidents will still happen. The cars’ computers will have to make difficult decisions: When a crash is unavoidable, should the car save its single occupant or five pedestrians? Should the car prioritize saving older people or younger people? What about a pregnant woman—should she count as two people? Automobile manufacturers need to reckon with such difficult questions in advance and program their cars to respond accordingly.

In my view, leaders answering ethical questions like these should be guided by the goal of creating the most value for society. Moving beyond a set of simple ethical rules (“Don’t lie,” “Don’t cheat”), this perspective—rooted in the work of the philosophers Jeremy Bentham, John Stuart Mill, and Peter Singer—provides the clarity needed to make a wide variety of important managerial decisions.

For centuries philosophers have argued over what constitutes moral action, theorizing about what people should do. More recently behavioral ethicists in the social sciences have offered research-based accounts of what people actually do when confronted with ethical dilemmas. These scientists have shown that environment and psychological processes can lead us to engage in ethically questionable behavior even if it violates our own values. If we behave unethically out of self-interest, we’re often unaware that we’re doing so—a phenomenon known as motivated blindness. For instance, we may claim that we contribute more to group tasks than we actually do. And my colleagues and I have shown that executives will unconsciously overlook serious wrongdoing in their company if it benefits them or the organization.

Maximizing Value

My approach to improving ethical decision-making blends philosophical thought with business-school pragmatism. I generally subscribe to the tenets of utilitarianism, a philosophy initially offered by Bentham, which argues that ethical behavior is behavior that maximizes “utility” in the world—what I’ll call value here. This includes maximizing aggregate well-being and minimizing aggregate pain, goals that are helped by pursuing efficiency in decision-making, reaching moral decisions without regard for self-interest, and avoiding tribal behavior (such as nationalism or in-group favoritism). I’m guessing that you largely agree with these goals, even if you hew to philosophies that focus on individual rights, freedom, liberty, and autonomy. Even if you are committed to another philosophical perspective, try to appreciate the goal of creating as much value as possible within the limits of that perspective.

In general, the decisions endorsed by utilitarianism align with most other philosophies most of the time and so provide a useful gauge for examining leadership ethics. But like other philosophies, strict utilitarianism doesn’t always serve up easy answers. Its logic and limits can be seen, for example, in the choices facing manufacturers of those self-driving cars. If the goal is simply to maximize value, the automobiles should be programmed to limit collective suffering and loss, and the people in the car shouldn’t be accorded special status. By that calculus, if the car must choose between sparing the life of its single occupant and sparing the lives of five people in its path, it should sacrifice the passenger.

Clearly this presents a host of issues—What if the passenger is pregnant? What if she’s younger than the pedestrians?—and no simple utilitarian answer for how best to program the car exists.
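
To see how quickly the calculus turns on contestable assumptions, consider a minimal sketch in Python. Everything in it is hypothetical: the two maneuvers, the harm weights, and the very idea that a person can be summarized by a single number.

```python
# Toy sketch of a strict "minimize total harm" rule. All scenarios and
# weights are hypothetical; nothing here resembles a real vehicle policy.
from typing import Dict, List

def total_harm(people: List[str], weights: Dict[str, float]) -> float:
    """Sum the assumed harm weights of everyone hurt by a maneuver."""
    return sum(weights.get(person, 1.0) for person in people)

def choose_maneuver(options: Dict[str, List[str]], weights: Dict[str, float]) -> str:
    """Pick the maneuver whose total weighted harm is lowest."""
    return min(options, key=lambda name: total_harm(options[name], weights))

options = {
    "swerve (harms the single passenger)": ["passenger"],
    "stay the course (harms five pedestrians)": ["pedestrian"] * 5,
}

# With every life weighted equally, the rule sacrifices the passenger.
print(choose_maneuver(options, weights={}))

# Weight the passenger more heavily, as a manufacturer protecting its buyer
# might, and the same rule flips; the hard part is not the arithmetic but
# the weights.
print(choose_maneuver(options, weights={"passenger": 6.0}))
```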

Furthermore, manufacturers could reasonably argue that people would be less likely to buy a car that doesn’t prioritize their lives. So car companies that didn’t prioritize the passenger would be in a weaker competitive position than those that did—and car buyers might well opt for less-safe cars that are driven by humans. Nevertheless, utilitarian values can be usefully applied in considering what sort of regulation could help create the greatest benefit for all.

Although the autonomous-vehicle case represents a tougher ethical decision than most managers will ever face, it highlights the importance of thinking through how your decisions, large and small, and the decisions of those you manage, can create the most value for society. Often people think of ethical leaders as those who adhere to the simple rules I’ve mentioned. But when leaders make fair personnel decisions, devise trade-offs that benefit both sides in a negotiation, or allocate their own and others’ time wisely, they are maximizing “utility”—creating value in the world and thereby acting ethically and making their organizations more ethical as a whole.

Overcoming Barriers

Consider two questions posed by the psychologist Daniel Kahneman and colleagues:

  1. How much would you pay to save 2,000 migrating birds from drowning in uncovered oil ponds?

  2. How much would you pay to save 200,000 migrating birds from drowning in uncovered oil ponds?

Their research shows that people who are asked the first question offer about the same amount as do people who are asked the second question. Of course, if our goal is to create as much value as possible, a difference in the number of birds should affect how much we choose to pay. This illustrates the limitations of our ethical thinking and suggests that improving ethical decision-making requires deliberately making rational decisions that maximize value rather than going with one’s gut.

The concept of bounded rationality, which is core to the field of behavioral economics, sees managers as wanting to be rational but influenced by biases and other cognitive limitations that get in the way. Scholars of decision-making don’t expect people to be fully rational, but they argue that we should aspire to be so in order to better align our behavior with our goals. In the ethics domain we struggle with bounded ethicality—systematic cognitive barriers that prevent us from being as ethical as we wish to be. By adjusting our personal goals from maximizing benefit for ourselves (and our organizations) to behaving as ethically as possible, we can establish a sort of North Star to guide us. We’ll never reach it, but it can inspire us to create more good, increasing well-being for everyone. Aiming in that direction can move us toward increasing what I call maximum sustainable goodness: the level of value creation that we can realistically achieve.

Trying to create more value requires that we confront our cognitive limitations. As readers of Kahneman’s book Thinking, Fast and Slow know, we have two very different modes of decision-making. System 1 is our intuitive system, which is fast, automatic, effortless, and emotional. We make most decisions using System 1. System 2 is our more deliberative thinking, which is slower, conscious, effortful, and logical. We come much closer to rationality when we use System 2. The philosopher and psychologist Joshua Greene has developed a parallel two-system view of ethical decision-making: an intuitive system and a more deliberative one. The deliberative system leads to more-ethical behaviors. Here are two examples of strategies for engaging it:

First, make more of your decisions by comparing options rather than assessing each individually. One reason that intuition and emotions tend to dominate decision-making is that we typically think about our options one at a time. When evaluating one option (such as a single job offer or a single potential charitable contribution), we lean on System 1 processing. But when we compare multiple options, our decisions are more carefully considered and less biased, and they create more value. We donate on the basis of emotional tugs when we consider charities in isolation; but when we make comparisons across charities, we tend to think more about where our contribution will do the most good. Similarly, in research with the economists Iris Bohnet and Alexandra van Geen, I found that when people evaluate job candidates one at a time, System 1 thinking kicks in, and they tend to fall back on gender stereotypes. For example, they are more likely to hire men for mathematical tasks. But when they compare two or more applicants at a time, they focus more on job-relevant criteria, are more ethical (less sexist), hire better candidates, and obtain better results for the organization.

The second strategy involves adapting what the philosopher John Rawls called the veil of ignorance. Rawls argued that if you thought about how society should be structured without knowing your status in it (rich or poor, man or woman, Black or white)—that is, behind a veil of ignorance—you would make fairer, more-ethical decisions. Indeed, my recent empirical research with Karen Huang and Joshua Greene shows that those who make ethical decisions behind a veil of ignorance do create more value. They are more likely, for instance, to save more lives with scarce resources (say, medical supplies), because they allocate them in less self-interested ways. Participants in our study were asked whether it was morally acceptable for oxygen to be taken away from a single hospital patient to enable surgeries on nine incoming earthquake victims. They were more likely to agree that it was when the “veil” obscured which of the 10 people they might be. Not knowing how we would benefit (or be harmed) by a decision keeps us from being biased by our position in the world.
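
The arithmetic behind that result is worth spelling out, under the single assumption that behind the veil you are equally likely to be any of the ten people involved: the allocation that saves more lives is also the one that maximizes your own expected chance of surviving.

```python
# Behind the veil of ignorance you could be any of the 10 people with equal
# probability, so your expected chance of survival is simply the fraction saved.
TOTAL_PEOPLE = 10

def expected_survival(people_saved: int, total: int = TOTAL_PEOPLE) -> float:
    """Chance of surviving when you are equally likely to be anyone involved."""
    return people_saved / total

print(expected_survival(1))  # 0.1: keep the oxygen on the single patient
print(expected_survival(9))  # 0.9: redirect it to the nine earthquake victims
```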

A related strategy involves obscuring the social identity of those we judge. Today more and more companies eliminate names and pictures from applications in an initial hiring review to reduce biased decision-making and increase the odds of hiring the most-qualified candidates.

Creating Value Through Trade-offs

Which is more important to you: your salary or the nature of your work? The wine or the food at dinner? The location of your home or its size? Strangely, people are willing to answer these questions even without knowing how much salary they’d need to forgo to have more-interesting work, or how much more space they could have if they lived five miles farther from work or school, and so forth. The field of decision analysis argues that we need to know how much of one attribute will be traded for how much of the other to make wise decisions. Selecting the right job, house, vacation, or company policy requires thinking clearly about the trade-offs.

The easiest trade-offs to analyze involve our own decisions. Once two or more people are engaged in a decision and their preferences differ, it’s a negotiation. Typically, negotiation analysis focuses on what is best for a specific negotiator. But to the extent that you care about others and society at large, your decisions in negotiation should tilt toward trying to create value for all parties.

This is easy to see in a common family negotiation—one in which I’ve been involved hundreds of times. Imagine that you and your partner decide one evening to go out to dinner and then watch a movie. Your partner suggests dinner at an upscale Northern Italian restaurant that has recently reopened. You counterpropose your favorite pizza joint. The two of you compromise on a third establishment, which has good Italian food and pizza that’s a bit fancier than what your preferred pizza place offers. During dinner your partner proposes that you watch a documentary; you counterpropose a comedy; and you compromise on a drama. After a good (but not great) evening, you both realize that because your partner cared more about dinner and you cared more about the movie, choosing the upscale Northern Italian restaurant and the comedy would have made for a better evening.
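
A toy utility table makes the logic explicit. The numbers below are invented for illustration; the only structural assumption is the one in the story, that your partner cares more about dinner and you care more about the movie.

```python
# Hypothetical satisfaction scores (0-10). Invented numbers, chosen only so
# that the partner cares more about dinner and you care more about the movie.
dinner = {
    "upscale Italian": {"you": 4, "partner": 9},
    "pizza joint":     {"you": 8, "partner": 3},
    "compromise spot": {"you": 6, "partner": 6},
}
movie = {
    "documentary": {"you": 3, "partner": 8},
    "comedy":      {"you": 9, "partner": 4},
    "drama":       {"you": 6, "partner": 6},
}

def joint_value(dinner_pick: str, movie_pick: str) -> int:
    """Total satisfaction across both people and both decisions."""
    return sum(dinner[dinner_pick][p] + movie[movie_pick][p] for p in ("you", "partner"))

print(joint_value("compromise spot", "drama"))   # 24: issue-by-issue compromise
print(joint_value("upscale Italian", "comedy"))  # 26: each side wins the issue it values more
```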

This comparatively trivial example illustrates how to create value by looking for trade-offs. Negotiation scholars have offered very specific advice on ways to find more sources of value. These strategies include building trust, sharing information, asking questions, giving away value-creating information, negotiating multiple issues simultaneously, and making multiple offers simultaneously.

If you’re familiar with negotiation strategy, you appreciate that most important negotiations involve a tension between claiming value for yourself (or your organization) and creating value for both parties—enlarging the pie. Even when they know that the size of the pie isn’t fixed, many negotiators worry that if they share the information needed to create value for all, the other party may be able to claim more of the value created—and they don’t want to be suckers. All the leading books on managerial negotiations highlight the need to create value while managing the risk of losing out.

Whereas many experts would define negotiation ethics in terms of not cheating or lying, I define it as putting the focus on creating the most value (which is of course helped by being honest). You don’t ignore value claiming but, rather, consciously prevent it from getting in the way of making the biggest pie possible. Even if your counterpart claims a bit of extra value as a result, a focus on value creation is still likely to work for you in the long run. Your losses to the occasional opportunistic opponent will be more than compensated for by all the excellent relationships you develop as an ethical negotiator who is making the world a bit better.

Using Time to Create Value

People tend not to think of allocating time as an ethical choice, but they should. Time is a scarce resource, and squandering it—your own or others’—only compromises value creation. Conversely, using it wisely to increase collective value or utility is the very definition of ethical action.

Consider the experience of my friend Linda Babcock, a professor at Carnegie Mellon University, who noticed that her email was overflowing with requests for her to perform tasks that would help others but provide her with little direct benefit. She was happy to be a good citizen and do some of them, but she didn’t have time to take on all of them. Suspecting that women were being asked more often than men to perform tasks like these, Linda asked four of her female colleagues to meet with her to discuss her theory. At that gathering the I Just Can’t Say No club was born. These female professors met socially, published research, and helped one another think more carefully about where their time would create the most value.

Their concept has implications for all of us who claim we’re short on time: You can consider a request for your time as a request for a limited resource. Rather than making intuitive decisions out of a desire to be nice, you can analyze how your time, and that of others, will create the most value in the world. That may free you to say no, not out of laziness but out of a belief that you can create more value by agreeing to different requests.

Allocating tasks among employees offers managers other opportunities to create value. One helpful concept is the notion of comparative advantage, introduced by the British political economist David Ricardo in 1817. Many view it as an economic idea; I think of it as a guide to ethical behavior. Assessing comparative advantage involves determining how to allow each person or organization to use time where it can create the most value. Organizations have a comparative advantage when they can produce and sell goods and services at a lower cost than competitors do. Individuals have a comparative advantage when they can perform a task at a lower opportunity cost than others can. Everyone has a source of comparative advantage; allocating time accordingly creates the most value.

Ricardo’s concept can be seen in many organizations where one individual is truly amazing at lots of things. Picture a tech start-up where the founder has the greatest technical ability but it’s only a bit greater than that of the next-most-talented technical person. Yet the founder is dramatically more effective than all other employees at pitching the company to investors. She has an absolute advantage on technical issues, but her comparative advantage is in dealing with external constituencies, and more value will be created when she focuses her attention there. Many managers instinctively leverage their and their employees’ absolute advantage rather than favoring their comparative advantage. The result can be a suboptimal allocation of resources and less value creation.
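
A stylized version of that start-up, with made-up productivity figures, shows the difference between the two kinds of advantage: the founder is slightly better at engineering in absolute terms, but every week she spends coding forgoes far more value in pitching, so the comparative-advantage assignment creates more total value.

```python
# Hypothetical value (in arbitrary units) each person creates in a week on
# each task. The founder has an absolute advantage at both tasks.
value_per_week = {
    "founder":  {"engineering": 105, "pitching": 200},
    "engineer": {"engineering": 100, "pitching": 40},
}

def opportunity_cost(person: str, task: str) -> float:
    """Value forgone on the other task when this person spends the week on `task`."""
    other = "pitching" if task == "engineering" else "engineering"
    return value_per_week[person][other]

print(opportunity_cost("founder", "engineering"))   # 200: a week of coding costs her pitching
print(opportunity_cost("engineer", "engineering"))  # 40: his coding forgoes far less

# Assign by absolute advantage (founder codes, engineer pitches): 105 + 40 = 145
# Assign by comparative advantage (founder pitches, engineer codes): 200 + 100 = 300
```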

Integrating Your Ethical Self

Whatever your organization, I’m guessing it’s quite socially responsible in some ways but less so in others, and you may be uncomfortable with the latter. Most organizations get higher ethical marks on some dimensions than on others. I know companies whose products make the world worse, but they have good diversity and inclusion policies. I know others whose products make the world better, but they engage in unfair competition that destroys value in their business ecosystem. Most of us are ethically inconsistent as well. Otherwise honest people may view deception in negotiation with a client or a colleague as completely acceptable. If we care about the value or harm we create, remembering that we’re likely to be ethical in some domains and unethical in others can help us identify where change might be most useful.

Andrew Carnegie gave away 90% of his wealth—about $350 million—to endow an array of institutions, including Carnegie Hall, the Carnegie Foundation, and more than 2,500 libraries. But he also engaged in miserly, ineffective, and probably criminal behavior as a business leader, such as destroying the union at his steel mill in Homestead, Pennsylvania. More recently, this divide between good and bad is evident in the behavior of the Sackler family. The Sacklers have made large donations to art galleries, research institutes, and universities, including Harvard, with money earned through the family business, Purdue Pharma, which made billions by marketing—and, most experts argue, overmarketing—the prescription painkiller OxyContin. By 2018 OxyContin and other opioids were responsible for the deaths of more than 100 Americans a day.

All of us should think about the multiple dimensions where we might create or destroy value, taking credit when we do well but also noticing opportunities for improvement. We tend to spend too little time on the latter task. When I evaluate various aspects of my life, I can identify many ways in which I have created value for the world. Yet I can also see where I might have done far better. My plan is to do better next year than last year. I hope you will find similar opportunities in your own life.

Increasing Your Impact as a Leader

Leaders can do far more than just make their own behavior more ethical. Because they are responsible for the decisions of others as well as their own, they can dramatically multiply the amount of good they do by encouraging others to be better. As a leader, think about how you can influence your colleagues with the norms you set and the decision-making environment you create.

People follow the behavior of others, particularly those in positions of power and prestige. Employees in organizations with ethical leaders can be expected to behave more ethically themselves. One of my clients, a corporation that gets rave reviews for its social-responsibility efforts, created an internal video featuring four high-level executives, each telling a story about going above the boss’s head at a time when the boss wasn’t observing the ethical standards espoused by the corporation. The video suggested that questioning authority is the right thing to do when that authority is destroying societal value. By establishing norms for ethical behavior—and clearly empowering employees to help enforce them—leaders can affect hundreds or even thousands of other people, motivating and enabling them to act more ethically themselves.

Leaders can also create more value by shaping the environment in which others make decisions. In their book Nudge, Richard Thaler and Cass Sunstein describe how we can design the “architecture” surrounding choices to prompt people to make value-creating decisions. Perhaps the most common type of nudge involves changing the default choice that decision-makers face. A famous nudge encourages organ donation in some European nations by enrolling citizens in the system automatically, letting them opt out if they wish. The program increased the proportion of people agreeing to be donors from less than 30% to more than 80%.

Leaders can develop new, profitable products and make the world a better place through effective nudging. After publishing a paper on ethical behavior, for example, I received an email from a start-up insurance executive named Stuart Baserman. His company, Slice, sells short-term insurance to people who run home-based businesses. He was looking for ways to get policyholders to be more honest in the claims process, and we worked together to develop some nudges.

We created a process whereby claimants use a short video taken with a phone to describe a claim. This nudge works because most people are far less likely to lie in a video than in writing. Claimants are also asked verifiable questions about a loss, such as “What did you pay for the object?” or “What would it cost to replace it on Amazon.com?”—not “What was it worth?” Specific questions nudge people to greater honesty than ambiguous questions do. And claimants are asked who else knows about the loss, because people are less likely to be deceptive when others might learn about their dishonesty. These nudges not only reduce fraud and make the insurance business more efficient but also allow Slice to benefit by helping people to be ethical.

CONCLUSION

New ethical challenges confront us daily, from what algorithm to create for self-driving cars to how to allocate scarce medical supplies during a pandemic. As technology creates amazing ways to improve our lives, our environmental footprint becomes a bigger concern. Many countries struggle with how to act when their leaders reject System 2 thinking and even truth itself. And in too many countries, finding collective value is no longer a national goal. Yet we all crave direction from our leaders. I hope that the North Star I’ve described influences you as a leader. Together we can do our best to be better.

Copyright 2020 Harvard Business School Publishing Corporation. Distributed by The New York Times Syndicate.

Max H. Bazerman

Max H. Bazerman is the Jesse Isidor Straus Professor of Business Administration at Harvard Business School.
