
Quality and Risk

Implementing an Automated Patient Satisfaction Survey

Paul Louiselle, MA, FACHE | Andrew Sandberg

August 8, 2018


Abstract:

Patient satisfaction is a key metric in evaluating the overall performance of a medical group. For this reason, third-party payers and provider network leaders are steadfast in requiring medical groups to collect and report such data. The good news is that the efficiency and cost-effectiveness of implementing and administering a patient satisfaction survey within a medical group have improved considerably over the past few years. Advances in the technology and improvements in compiling and comparing the data have reduced many of the barriers to acquiring this important feedback.




Over the years, our medical practice has used various patient satisfaction surveys, only to discontinue them because they were inefficient and costly to administer. We obtained important feedback through these surveys, but the drain on our practice’s resources inevitably caused us to abandon them. This cycle of adoption and termination of patient satisfaction surveys has been a common trend among other specialty groups as well.

As a practice that strongly believes in the value of patient feedback, we have found it frustrating to abandon patient satisfaction surveys because of their inefficient and costly design. We have long believed that obtaining patient feedback would help us achieve our goal of improving patient satisfaction, one of the three important pillars within the Triple Aim framework we subscribe to (Figure 1).

Figure 1. Triple Aim framework.

Our commitment to the Triple Aim was an important variable prompting us to re-engage in the quest for patient satisfaction data. Another important variable was the trend of network leadership and third-party payers requiring medical groups to report such data. These requirements often are woven into provider agreements and have become central to a medical group’s ability to participate in the network, or, at a minimum, realize the maximum available reimbursement. Because increasing patient volume and maximizing reimbursement are key to our survival, meeting the network participation criteria was considered essential.

Vendor Selection

Although many vendors supply patient satisfaction surveys to medical groups, we ultimately selected the Physician Empowerment Suite offered through Constellation/MMIC. (Constellation is the holding company of our malpractice insurer, MMIC.) The patient satisfaction survey it had developed earned our immediate trust because of our longstanding and successful relationship with MMIC. More specifically, the idea of coupling patient satisfaction feedback with other risk-reduction initiatives was appealing.

After conducting an exhaustive due diligence analysis of the Physician Empowerment Suite, we implemented its patient satisfaction tool under a short-term agreement. This agreement gave us the opportunity to evaluate the organization behind the Physician Empowerment Suite as well as the quality of the patient satisfaction data to be captured.

In a nutshell, our initial experience with the Physician Empowerment Suite personnel and their survey tool was very positive. It was evident that they had developed a system by which our information could be collected and reported without draining our limited practice resources.

Survey Workflow

Our decision to implement a patient satisfaction survey was made easy by the simple, automated design of the service. The end-to-end process for distributing the survey and collecting and analyzing the responses is designed for simplicity and efficiency. Figure 2 illustrates the workflow for our patient satisfaction survey. Each step is labeled “manual” or “automated” to convey the amount of human time and effort required.

Figure 2. Patient satisfaction survey workflow. EHR, electronic health record.

Part of the process of conducting the patient satisfaction survey is manual. Every two weeks, one of our staff members is assigned the task of generating a report and uploading that report to a portal within the Physician Empowerment Suite system. Unlike the previous systems we have used, this process is simple and efficient. We estimate that it takes less than 15 minutes every two weeks.
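To picture this biweekly step concretely, the sketch below shows one way the extract could be prepared for upload. It assumes a hypothetical EHR export named ehr_visits.csv with patient_id, visit_date, provider, clinic, and email columns; the real report layout and the portal’s upload format are specific to our EHR and to the Physician Empowerment Suite, so treat this only as an illustration.

```python
# Minimal sketch of the biweekly "generate and filter" step.
# Assumes a hypothetical EHR export (ehr_visits.csv) with columns:
# patient_id, visit_date (YYYY-MM-DD), provider, clinic, email.
import csv
import re
from datetime import date, timedelta

EXPORT_FILE = "ehr_visits.csv"     # assumed name of the EHR report
UPLOAD_FILE = "survey_upload.csv"  # file handed to the survey portal
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")  # basic syntax check only

cutoff = date.today() - timedelta(days=14)  # biweekly cycle

with open(EXPORT_FILE, newline="") as src, open(UPLOAD_FILE, "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        recent = date.fromisoformat(row["visit_date"]) >= cutoff
        has_email = bool(EMAIL_RE.match(row["email"].strip()))
        if recent and has_email:  # keep only recent visits with a usable e-mail
            writer.writerow(row)
```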

Although the patient satisfaction tool is efficient to administer, it would be misleading to claim that the process is not time consuming. The time, however, is spent analyzing the results, not administering the survey.

Response Rate and Bias

As with most “customer” surveys, a common challenge is obtaining a response rate high enough to identify statistically significant trends and patterns in the responses. Our response rate has fluctuated between 9% and 13% per month. The generally accepted average response rate for external (customer/patient) surveys is between 10% and 15%.(1)

One factor contributing to our less-than-optimal response rate is the lack of valid e-mail addresses for many patients. On average, only about 45% of the patients in our electronic health record extract have provided us with a valid e-mail address. This alone cuts our potential survey pool roughly in half, further challenging the statistical significance of the responses.
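To show how the two factors compound, the short calculation below combines the figures cited above. It assumes the 9% to 13% monthly response rate is measured against surveys actually delivered; the exact basis of the vendor’s reporting may differ.

```python
# Illustrative calculation using the figures cited in the text.
email_coverage = 0.45  # share of extracted patients with a valid e-mail
response_rate_low, response_rate_high = 0.09, 0.13  # monthly rate among surveys sent

# Share of *all* extracted patients who end up returning a survey.
effective_low = email_coverage * response_rate_low    # ~0.04 (about 4%)
effective_high = email_coverage * response_rate_high  # ~0.06 (about 6%)
print(f"Effective response: {effective_low:.1%} to {effective_high:.1%} of all patients")
```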


We have implemented two initiatives to address our low response rate. First, the probability of a response has been found to be higher when the survey is sent shortly after the patient’s appointment. For this reason, our goal is to send each patient a survey within two weeks of his or her appointment. Our second initiative to address the low response rate is to improve the workflow used by our clinic staff to collect and enter the patients’ e-mail addresses. We are confident that we will identify opportunities to improve the e-mail accuracy rate by examining this workflow.

As with most surveys, responses tend to come from people who are either very satisfied or very dissatisfied with the service they received. People who fall between those extremes are not always motivated to take the time to complete the survey. Because of this, obtaining an accurate picture of a practice’s performance can be difficult. As we collect more data and increase our response rate, we expect to improve our ability to draw conclusions from the data.

Feedback

As we had hoped, the survey results we have received have been enlightening and valuable. These data have provided us with an entirely new source of information to use in improving the service to our patients.

The survey questions (and results) are arranged in two broad categories: questions about the provider who treated the patient and questions about practice operations (Figure 3). Each category includes both structured and open-ended questions. The structured questions use a scale of 1 to 10, with 1 being the poorest rating and 10 the best rating possible.
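As a concrete example of how the structured responses can be rolled up, the sketch below averages the 1-to-10 scores by category. The column names (category, score) are hypothetical stand-ins for whatever fields the vendor’s export actually uses.

```python
# Minimal sketch: average 1-10 structured scores by question category.
# Assumes a hypothetical export with "category" ("provider" or "practice")
# and "score" columns; real field names depend on the vendor's report.
import csv
from collections import defaultdict

totals = defaultdict(lambda: [0, 0])  # category -> [sum of scores, count]

with open("survey_responses.csv", newline="") as f:
    for row in csv.DictReader(f):
        score = int(row["score"])
        if 1 <= score <= 10:  # ignore out-of-range entries
            totals[row["category"]][0] += score
            totals[row["category"]][1] += 1

for category, (total, count) in totals.items():
    print(f"{category}: mean score {total / count:.1f} over {count} responses")
```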

Figure 3. Survey question categories.

We have found the open-ended responses to be the most valuable feedback. They provide clear, specific feedback about particular issues or incidents, making it easy for our practice to address each one quickly on an individual basis. The structured feedback also is helpful, but it is more difficult to identify legitimate trends and draw conclusions about areas of our practice that may need improvement. Table 1 gives examples of the open-ended questions and our action plan for responding to that feedback.

Using Feedback

Because it had been a long time since our practice had received any formal feedback from our patient population, our leadership team dedicated time to discussing how best to use the feedback once it was collected. The following principles and recommendations emerged from those brainstorming sessions:

Principles

  • The purpose of collecting the data is to improve the service to our patients.

  • The data will not be used to punish individual employees.

  • The data will not be used to pit one practice location against another.

  • We will celebrate the positive feedback and work on correcting the negative.

Recommendations

On a quarterly basis, we will share the structured and open-ended feedback in the following ways:

  • Staff will receive the feedback that is specific to their clinic location. They will not receive data on any individual provider. Additionally, employee names cited in the open-ended responses will be hidden from other employees, in keeping with our principle of not using the data to punish employees. (A sketch of how this quarterly split could be produced follows the list.)

  • Providers will receive feedback that is specific to them. They will not receive data identifying any other provider.

  • We will review the patient satisfaction feedback routinely through our process improvement meetings. Specific and ongoing improvement initiatives will be identified through the review of these data.
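A minimal sketch of the quarterly split described above is shown below. It assumes a hypothetical list of response records with clinic, provider, and comment fields; a real implementation would also need to redact employee names mentioned inside the comment text itself, which this sketch does not attempt.

```python
# Minimal sketch: split quarterly feedback into clinic-level views (no provider
# identifiers) and provider-level views (each provider sees only their own rows).
# The "clinic", "provider", and "comment" keys are assumed; real exports differ.
from collections import defaultdict

def split_feedback(responses):
    by_clinic = defaultdict(list)
    by_provider = defaultdict(list)
    for r in responses:
        # Clinic view: drop the provider identifier before sharing with staff.
        by_clinic[r["clinic"]].append({"comment": r["comment"]})
        # Provider view: each provider receives only their own feedback.
        by_provider[r["provider"]].append({"comment": r["comment"]})
    return by_clinic, by_provider

if __name__ == "__main__":
    sample = [
        {"clinic": "Minneapolis", "provider": "Provider A", "comment": "Short wait, great visit."},
        {"clinic": "Minneapolis", "provider": "Provider B", "comment": "Billing question went unanswered."},
    ]
    clinics, providers = split_feedback(sample)
    print(clinics["Minneapolis"])
    print(providers["Provider A"])
```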

Conclusion

Given the relative ease of administering a patient satisfaction survey, there are few excuses for avoiding this important activity. In addition to potentially improving patient volume and maximizing reimbursement, a medical practice can make considerable improvements in its service to its patients, which truly is the ultimate goal.

Choosing the right vendor and anticipating some of the hurdles will be key to successful implementation of a patient satisfaction survey. And using the data constructively keeps your staff focused on process deficiencies rather than on perceived deficiencies on the part of their fellow employees.

Reference

  1. Fryrear A. What’s a good survey response rate? SurveyGizmo blog. July 2015. www.surveygizmo.com/resources/blog/survey-response-rates/

Paul Louiselle, MA, FACHE

Chief Executive Officer, Pediatric Surgical Associates, Ltd., Minneapolis, Minnesota; phone: 612-813-8006; e-mail: plouiselle@pediatricsurgical.com.


Andrew Sandberg

Operations Analyst, Pediatric Surgical Associates, Ltd., Minneapolis, Minnesota.
