The Fertility Clinic Success Rate and Certification Act of 1992 authorized the Centers for Disease Control and Prevention (CDC) to track all assisted reproductive technology (ART) cycles performed in the United States. The Act is restricted to treatments that manipulate both eggs and sperm (i.e., egg retrieval, laboratory fertilization, and transfer to a uterus). The CDC now reports success rates for more than 450 clinics across the country each year, covering more than 250,000 cycles. In the 2016 ART National Summary Report, the national percentage of cycles using fresh embryos from fresh nondonor eggs that resulted in term, normal-weight, singleton live births ranged from 2% (women older than 42 years of age) to 21% (women younger than 35). The national percentage of transfers using frozen embryos from nondonor eggs that resulted in term, normal-weight, singleton live births ranged from 21% (women older than 42) to 35% (women younger than 35). The CDC also publishes clinic-level success rates for the general public to review. Women can download an Excel file and sort clinic performance based on their own situation and desired outcomes.
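The filter-and-sort step a reader would perform on the downloaded file can be sketched in a few lines of Python. The records, field names, and minimum-volume threshold below are invented for illustration and are not drawn from the CDC file itself:

```python
# Hypothetical clinic-level records, loosely modeled on the kinds of fields
# a reader might find in the CDC's downloadable ART data. All clinic names
# and numbers are invented.
clinics = [
    {"clinic": "Clinic A", "age_band": "<35", "cycles": 412, "live_birth_rate": 0.34},
    {"clinic": "Clinic B", "age_band": "<35", "cycles": 120, "live_birth_rate": 0.38},
    {"clinic": "Clinic C", "age_band": ">42", "cycles": 95,  "live_birth_rate": 0.03},
    {"clinic": "Clinic D", "age_band": "<35", "cycles": 44,  "live_birth_rate": 0.41},
]

def rank_clinics(records, age_band, min_cycles=100):
    """Keep clinics in the patient's age band, drop low-volume clinics
    whose rates are statistically unstable, and sort the rest by
    live-birth rate, best first."""
    eligible = [r for r in records
                if r["age_band"] == age_band and r["cycles"] >= min_cycles]
    return sorted(eligible, key=lambda r: r["live_birth_rate"], reverse=True)

for r in rank_clinics(clinics, "<35"):
    print(r["clinic"], r["live_birth_rate"])
```

Note that the `min_cycles` cutoff matters: a clinic with very few cycles can post an impressive rate by chance, which is why volume is worth checking alongside outcomes.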
For a couple or woman trying to deliver a healthy singleton baby using ART, this reporting provides most of the information needed to make an educated decision about where to obtain assisted reproductive services. Reading a magazine ad, obtaining a referral from a friend, or reading Yelp reviews are all inferior to these CDC reports. Why is this level of data the exception rather than the rule in healthcare? First, the outcome can be ascertained within a year of starting the process. Second, most of the variables known to affect ART success rates are included in the CDC reports. Finally, the federal government mandated annual reporting of each clinic's success rates.
In April 2016, Steve Findley, an editor for Consumer Reports, described the state of provider scorecards as highly variable, inconsistent, overly complex, lacking context for the information presented, and focused on survival rates rather than patient experience, complication rates, and rates of poor care. He referred to Hibbard et al.'s work demonstrating that more than 1,400 employed adults could distinguish lower-cost, higher-quality providers when presented with easy-to-understand quality data. Among his many recommendations to improve provider ratings and report cards, he suggested: use "teachable" moments (e.g., choosing a surgeon) to build report-card awareness; measure clinical results that consumers and patients can understand and use, such as complication and outcome measures for surgical procedures; move aggressively to update provider rating data every year; explain in plain language the data sources, methods, caveats, limitations, and appropriate use of provider ratings and comparisons; minimize cognitive burden and help consumers process and synthesize information; make report cards understandable to consumers with low literacy and numeracy skills (more than 50% of all consumers); and make public reports fully compatible with, and accessible via, mobile device software.
Can we apply these learnings in another area of health care? Hip and knee replacements are the most common inpatient surgery for Medicare beneficiaries, with inpatient costs for these two procedures exceeding seven billion dollars in 2014. In August 2018, the Lewin Group published Medicare's experience with the Comprehensive Care for Joint Replacement (CJR) Model, the mandatory experiment in 67 geographic areas testing the value of a 90-day bundled payment for hip and knee replacements with or without a fracture. In the first year, hospitals randomized to the CJR model had a greater decrease in episode payments (6.5%) than controls (3.2%). The CJR model did not increase unplanned readmissions, emergency department use, complications (acute myocardial infarction, pneumonia, or sepsis/septicemia/shock within seven days; surgical site bleeding, pulmonary embolism, or death within 30 days; mechanical complications or periprosthetic joint infection/wound infection within 90 days), or all-cause mortality. In December 2018, Medicare issued a rule allowing hospitals to stop participating in CJR.
Although Medicare's list of complications is important, many patients might prefer to track improvement in pain and functional status instead. In addition to tracking complications and the patient's experience of hospital care (CAHPS Hospital Survey), CJR allows hospitals to submit patient-reported outcome data on a voluntary basis. The hospital is expected to measure the patient's health status and disease burden (e.g., Veterans RAND 12, EQ-5D) and joint-specific symptoms (e.g., HOOS-PS, KOOS-PS, joint pain) up to three months before an elective procedure and between 9 and 12 months afterward. The KOOS-PS ranges from zero to 100, with higher scores indicating greater difficulty. Singh et al. estimated that the minimal clinically important difference and moderate-improvement thresholds for the KOOS-PS were -2.2 and -15, respectively. The International Consortium for Health Outcomes Measurement suggests surveying patients annually after a joint replacement. Patient-reported outcomes could also be supplemented with remote patient monitoring devices such as pedometers to track changes in daily activity.
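Applying the thresholds above to a patient's pre- and post-operative scores is simple arithmetic, sketched below. The function and variable names are ours, not part of any CJR or KOOS-PS specification; the thresholds are the Singh et al. estimates cited in the text:

```python
# Classify a pre- to post-operative KOOS-PS change against the thresholds
# cited above (Singh et al.). KOOS-PS runs 0-100 with higher scores meaning
# more difficulty, so improvement shows up as a negative change.
MCID = -2.2        # minimal clinically important difference
MODERATE = -15.0   # moderate-improvement threshold

def classify_koos_ps_change(pre_score, post_score):
    """Return a label for the change from the pre-op to the post-op score."""
    change = post_score - pre_score
    if change <= MODERATE:
        return "moderate improvement"
    if change <= MCID:
        return "meets MCID"
    return "below MCID"

# Example: 60 before surgery, 40 at the 9-12 month survey (change of -20).
print(classify_koos_ps_change(60, 40))  # moderate improvement
```

A scorecard built on these labels would report, for each surgeon or hospital, the share of patients whose change met at least the MCID, rather than raw score averages.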
Although osteoarthritis of the hip and knee is not as well circumscribed as a cycle of assisted reproductive technology, we could work harder to collect patients' perceptions of pain and mobility before and after surgery, in addition to their hospital experience and complications, to identify higher-performing surgeons and teams and to provide more transparency to employers and payers interested in encouraging higher-value healthcare. As we migrate to more patient-centered decision making, the combination of tracking symptoms from the patient's perspective and developing methods to present comparative data among therapeutic options (e.g., surgeons) will become increasingly important.