
Physician Leadership, AI, and Employment Practices

Timothy E. Paterick, MD, JD, MBA


Nov 2, 2023


Physician Leadership Journal


Volume 10, Issue 6, Pages 24-26


https://doi.org/10.55834/plj.6842603892


Abstract

With a combination of access to large sets of training data on the internet, a considerable increase in computing power, and novel breakthroughs in neural networks, AI has the potential to affect every organization and every employee.




Today, a combination of almost-daily discoveries and ever-increasing improvements, backed by tens of billions of dollars of capital, means that every organization and every employee will be affected by artificial intelligence (AI).

How do healthcare organizations and physician leaders survive, adapt, and thrive in this rapidly developing landscape? Every physician leader and every healthcare organization should spend time learning about this AI tsunami and how to respond to it.

The forces of AI are impacting employment decisions; physician leaders should avoid liability by being alert to the pitfalls.

AI AND EEOC GUIDANCE

Physician leaders work in a fast-paced, constantly changing corporate medical world in which organizations are strained to adjust hiring, promotion, and termination decisions to meet their fiscal needs. AI algorithms increasingly facilitate those hiring, firing, and promotion decisions; however, many physician leaders remain bewildered by the technology and are searching for understanding.

On May 18, 2023, the U.S. Equal Employment Opportunity Commission (EEOC) announced the release of its second set of guidance regarding employers’ use of AI.(1) The EEOC’s non-binding guidance outlines key considerations that, in the EEOC’s opinion, help ensure that automated employment tools do not violate Title VII of the Civil Rights Act of 1964.(2)

The guidance comes on the heels of reports that the EEOC is training staff to identify discrimination triggered by automated systems and AI tools,(3) and follows the EEOC’s joint statement with officials from the Department of Justice, the Consumer Financial Protection Bureau, and the Federal Trade Commission emphasizing the agencies’ commitment to enforcing existing civil rights laws against biased and discriminatory AI hiring, promotion, and termination systems.

FUNDAMENTAL TERMINOLOGY

In recent years, employers have adopted algorithmic decision-making tools to assist in making decisions about recruitment, hiring, retention, promotion, transfer, performance monitoring, demotion, and dismissal. Employers increasingly turn to these AI tools in an attempt to save time, increase objectivity, optimize employee performance, and reduce bias in hiring and firing.

Understanding key concepts such as software, algorithm, and artificial intelligence will allow us to interpret the EEOC’s guidance better.

Software refers to information technology or procedures that provide instructions to a computer on how to perform a task or function. Distinctive types of software applications used in employment include software for automatic resume screening, hiring, interviewing, analytics, and worker management.

An algorithm is generally a set of instructions that a computer follows to accomplish an end goal. Human resources software and applications use algorithms to help employers process data and evaluate, rate, and make decisions about job applicants and employees.

Software or applications that include algorithmic decision-making tools are used at various stages of the employment process, from hiring, performance evaluations, and promotions to terminations.

Artificial intelligence (AI) is used by some employers and software vendors when developing algorithms to help employers evaluate, rate, and make critical decisions about job applicants and employees. Congress has defined AI to mean a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations and decisions influencing real or virtual environments.”

REAL-WORLD EMPLOYER PRACTICES

Employers rely on software platforms that incorporate algorithmic decision-making at several stages of the employment process. Resume scanners use keywords to prioritize applications. Employee monitoring software may rate employees on various work metrics. Virtual assistants and chatbots may ask candidates about their qualifications and reject those who do not meet pre-defined requirements.

Video interviewing software evaluates candidates based on their facial expressions and speech patterns. Testing software, relying on an applicant’s performance in an integrated game, supplies job-fitness scores based on personality, aptitudes, cognitive skills, and perceived cultural fit.
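
For illustration only, the following short Python sketch shows the general shape of the keyword-based resume screening described above; the keyword sets, scoring logic, and sample text are hypothetical assumptions, not a description of any actual vendor product:

```python
# A hypothetical sketch of keyword-based resume screening; real vendor
# tools are far more sophisticated. Keyword sets are illustrative only.

REQUIRED_KEYWORDS = {"nursing", "license"}       # hard threshold requirements
PREFERRED_KEYWORDS = {"leadership", "oncology"}  # keywords that raise priority

def score_resume(text: str):
    """Return a priority score, or None if a required keyword is missing."""
    words = set(text.lower().split())
    if not REQUIRED_KEYWORDS <= words:  # subset test: are all required words present?
        return None  # screened out, like a chatbot rejecting a below-threshold candidate
    return len(PREFERRED_KEYWORDS & words)

print(score_resume("Oncology nursing license with leadership experience"))  # 2
print(score_resume("Leadership experience in oncology"))                     # None
```

Even this toy version shows how readily such a filter can screen out qualified candidates whose resumes simply phrase things differently, which is the kind of unintended exclusion the EEOC guidance targets.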

IMPORTANT LEGISLATION

Title VII

Title VII prohibits employers from using tests or selection procedures that have the effect of disproportionately excluding persons based on race, color, religion, sex, or national origin unless the tests or selection procedures are “job related and consistent with business necessity”; this theory of liability is known as disparate impact discrimination.

Employers deciding whether to rely on a software vendor to develop or administer an algorithmic decision-making tool should evaluate whether using the tool causes a substantially lower selection rate for individuals with a characteristic protected by Title VII. A selection rate is the number of applicants in a group who are hired or promoted divided by the total number of applicants in that group.

In general, the “four-fifths rule” is used to determine whether the selection rate for one group is substantially different from the selection rate for another group: a selection rate that is less than four-fifths (80%) of the rate for the most-selected group suggests an adverse impact. The four-fifths rule may be used to draw an initial inference that the selection rates for two groups are substantially different and prompt the employer to seek additional information about the procedure or algorithm in question, as the example below illustrates.
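
To make the arithmetic concrete, here is a minimal sketch in Python, using hypothetical applicant counts (not drawn from the EEOC guidance), of how selection rates and the four-fifths comparison might be computed:

```python
# A minimal sketch of the four-fifths rule of thumb, not an EEOC tool.
# The applicant counts below are hypothetical and purely illustrative.

def selection_rate(selected: int, applicants: int) -> float:
    """Selection rate = number selected / total applicants in the group."""
    return selected / applicants

rate_a = selection_rate(80, 200)  # group A: 80 of 200 selected -> 0.40
rate_b = selection_rate(30, 150)  # group B: 30 of 150 selected -> 0.20

# Compare the lower rate to the higher rate; a ratio below 0.8
# (four-fifths) is an initial flag that the rates may be substantially
# different, prompting further review of the tool.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"rates: A={rate_a:.2f}, B={rate_b:.2f}, ratio={ratio:.2f}")
print("flag for further review" if ratio < 0.8 else "no initial flag")
```

Here the ratio is 0.50, well below the four-fifths threshold, so the employer should look more closely at the procedure before relying on it.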

If an employer developing a selection tool discovers that its use may have an adverse impact on individuals with a particular characteristic protected under Title VII, the employer should take steps to reduce the discriminatory impact or select a different tool so as not to undertake a selection process that violates Title VII. Failure to adopt a less-discriminatory algorithm that was considered during the development process may give rise to liability.

Americans with Disabilities Act

Although the Americans with Disabilities Act (ADA) was not discussed in the EEOC’s May 18, 2023, publication, the EEOC had previously issued technical guidance on the use of AI and discrimination in the workplace under the ADA. The most common ways an employer’s use of algorithmic decision-making tools could violate the ADA are:

  1. Failing to provide a “reasonable accommodation” necessary for a job applicant or employee to be rated fairly and accurately by the algorithm.

  2. Relying on an algorithmic decision-making tool that intentionally or unintentionally screens out an individual with a disability, even though the individual is able to do the job with a reasonable accommodation.

  3. Adopting an algorithmic decision-making tool that violates the ADA’s restrictions on disability-related inquiries and medical examinations.

THE IMPORTANCE OF UNDERSTANDING THE RULES

An employer who administers a pre-employment test may be liable for a resulting Title VII or ADA discrimination charge, even if an outside vendor developed the test. Similarly, an employer may be held liable for the actions of its agents, including software vendors, if the employer has given them authority to act on its behalf.

If the vendor states that the AI tool should be expected to result in a substantially lower selection rate for individuals with a particular characteristic protected by Title VII or the ADA, the employer should consider whether use of the tool is job-related and consistent with business necessity and whether alternatives exist that would reduce the likelihood of a disparate impact yet satisfy the employer’s needs. Even if the vendor’s assessment of the AI tool proves incorrect and the tool produces a disparate impact, the employer could still be held liable.

Given these potential liabilities associated with AI hiring and firing tools, employers would be prudent to conduct ongoing surveillance of the technologies they implement and to ask whether those technologies have the potential to discriminate in their hiring and firing processes.

Congress and the White House have weighed in on these issues. On April 13, 2023, Senate Majority Leader Chuck Schumer announced a high-level framework outlining a new regulatory regime for AI,(5) and on May 1, 2023, the White House announced that it will be releasing a request for information to learn more about AI tools being used by employers to monitor, evaluate, and manage an array of workers.(6)

TAKE-HOME POINTS

EEOC guidance emphasizes that automated decision-making tools would be treated as a selection procedure subject to the EEOC’s Uniform Guidelines on Employee Selection Procedures(4) when used to make decisions about whether to hire, promote, demote, or terminate applicants or current employees.

The EEOC guidance provides that if an employer uses automated decision-making tools, the employer may be responsible under Title VII if the procedure discriminates on a basis prohibited by Title VII, even if the test was developed by an outside vendor.

The four-fifths “rule of thumb” is a measure of adverse impact that determines whether the selection rate of one group is substantially lower than that of another group. The objective is to draw preliminary inferences and prompt further assessment.

The EEOC encourages employers to routinely scrutinize their AI tools for potentially disparate impact on individuals subjected to the automated selection procedure. The guidance states that if an employer fails to adopt a less-discriminatory algorithm that was considered during the development process, this may give rise to liability.

REFERENCES

  1. U.S. Equal Employment Opportunity Commission. EEOC Releases New Resource on Artificial Intelligence and Title VII. Press Release. May 18, 2023. https://www.eeoc.gov/newsroom/eeoc-releases-new-resource-artificial-intelligence-and-title-vii

  2. U.S. Equal Employment Opportunity Commission. Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964. https://www.eeoc.gov/laws/guidance/select-issues-assessing-adverse-impact-software-algorithms-and-artificial

  3. Rainey R. EEOC to Train Staff on AI-Based Bias as Enforcement Efforts Grow. Bloomberg Law News. May 5, 2023. http://news.bloomberglaw.com/daily-labor-report-eeoc-to-train-staff-on-ai-based-bias-as-enforcement-efforts-grow

  4. U.S. Equal Employment Opportunity Commission. 29 C.F.R. part 1607; EEOC Questions and Answers to Clarify and Provide a Common Interpretation of the Uniform Guidelines on Employee Selection Procedures. April 19, 2023.

  5. Senate Democrats. Schumer Launches Major Effort To Get Ahead Of Artificial Intelligence. News Release. April 13, 2023. https://www.democrats.senate.gov/newsroom/press-releases/schumer-launches-major-effort-to-get-ahead-of-artificial-intelligence

  6. Mulligan D, Yang J. Hearing from the American People: How Are Automated Tools Being Used to Surveil, Monitor, and Manage Workers? The White House Office of Science and Technology Policy News Release. May 1, 2023. https://www.whitehouse.gov/ostp/news-updates/2023/05/01/hearing-from-the-american-people-how-are-automated-tools-being-used-to-surveil-monitor-and-manage-workers/


Timothy E. Paterick, MD, JD, MBA, is professor of medicine at the Loyola University Chicago Health Sciences Campus in Maywood, Illinois.
