• Periop Proceedings

    This blog is intended to encourage open dialogue and learning amongst members and interested parties of the perioperative and periprocedural communities for the purpose of envisioning and encouraging higher performing systems.

  • About Our Bloggers

    Dan Krupka

    Dennis Fowler

    Warren Sandberg

Blog Archives

Enhancing the process for tracking implanted devices

Posted at 11:59 am on May 21, 2017

Most of us would find it strange that insurance claims include data on the medications administered to a patient, but none on implanted medical devices such as pacemakers, stents and artificial joints. For medications, the data take the form of codes that follow a prescribed protocol and facilitate safety monitoring. For implanted devices, the claim forms sent to insurance companies list the procedure without any data on the device itself. And yet such devices may pose safety problems, because it is difficult to test them as thoroughly as medications.

A few years ago, the US Food and Drug Administration (FDA) developed a protocol for coding implanted devices, and mandated that manufacturers apply such codes – known as Unique Device Identifiers (UDIs) – to their devices or their packaging. The specificity of a UDI lies somewhere between a vehicle identification number (which is specific to an individual automobile) and the barcode on a can of Coke (which may indicate only the product and the volume of the can). Although manufacturers of high-risk implantable devices are complying with the regulation, the FDA’s authority does not extend to requiring the use of UDIs in hospitals’ electronic health records or on claims. And there things stand.
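
Concretely, a GS1-format UDI string concatenates a device identifier – a GTIN naming the make and model – with production identifiers such as expiration date, lot and serial number, each introduced by a numeric application identifier. The short Python sketch below, using an invented sample UDI, shows how such a string splits apart; it is an illustration only, limited to the common two-digit application identifiers.

```python
# A minimal sketch of splitting the human-readable form of a GS1-format UDI
# into its parts. The sample UDI is invented for illustration; real parsers
# must also handle other issuing agencies' formats (e.g., HIBCC, ICCBBA).
import re

# Common two-digit GS1 application identifiers found in UDIs
AI_NAMES = {
    "01": "GTIN (device identifier)",
    "17": "Expiration date (YYMMDD)",
    "10": "Lot/batch number",
    "21": "Serial number",
}

def parse_udi(udi: str) -> dict:
    """Split e.g. '(01)00812345678906(17)200930(10)LOT123' into labeled fields."""
    parts = re.findall(r"\((\d{2})\)([^(]+)", udi)
    return {AI_NAMES.get(ai, ai): value for ai, value in parts}

sample = "(01)00812345678906(17)200930(10)LOT123(21)SN456"  # fictitious device
for label, value in parse_udi(sample).items():
    print(f"{label}: {value}")
```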

Yet many have recognized the value of including UDIs on claims. First, insurance claims are universal, based on standards developed by the so-called X12 Committee – the same body that created the protocol for placing medication codes on the claim form. Second, including device identifiers on claims would give stakeholders – including patients, physicians, manufacturers, insurance companies and researchers – access to better long-term data on the performance of implanted devices. Third, in case of problems, patients could be tracked down and alerted even if they no longer live near the hospital where they received the implant or are now covered by a different insurance company.

In the absence of an authority with the mandate to require the full-stream use of UDIs, initiatives are underway to facilitate the inclusion of device identifiers on claims and to demonstrate that doing so is feasible. The X12 Committee has developed a proposal for a field on the claim form that would accommodate UDIs for up to eight implanted devices. On the hospital front, projects are demonstrating the feasibility of moving UDI data from materials management systems to the electronic health record and on to device registries – repositories of data generally established by medical specialists. And last year, the Patient-Centered Outcomes Research Institute awarded a contract to demonstrate the feasibility of capturing the identifier for an implanted device at the point of use and transmitting it to an insurance company. The project is led by Joel Weissman, Deputy Director and Chief Scientific Officer of the Center for Surgery and Public Health at Brigham and Women’s Hospital in Boston, and I am a Co-Investigator.

Our project, which officially started in November 2016, has advanced to the point where we are confident that the process we envision will work without interfering with established processes, and we are preparing for test runs. Our initial efforts are aimed at the Catheterization Laboratory (Cath Lab) of Brigham and Women’s Hospital, where cardiac stents and valves are implanted. We had the good fortune to learn that UDI barcodes were already being scanned in the Cath Lab and captured in the electronic health record; however, nothing was being done with the data. The remaining challenges thus consist of transmitting the UDI from the electronic health record into the hospital’s billing system and, from there, onto a claim to be transmitted to an insurance company.
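
To make that hand-off concrete, here is a hypothetical sketch of the flow. All class and field names are invented for illustration – they do not reflect the actual EHR, billing, or X12 structures involved – and the only project-specific detail used is the proposed claim field’s capacity of eight UDIs.

```python
# A hypothetical sketch of the UDI hand-off described above. The class and
# field names are invented; only the eight-UDI claim capacity comes from the
# X12 proposal mentioned in the post.
from dataclasses import dataclass, field
from typing import List

MAX_UDIS_PER_CLAIM = 8  # capacity of the proposed X12 claim field

@dataclass
class EhrProcedureRecord:
    """What the EHR holds after the Cath Lab scans the device barcodes."""
    patient_id: str
    procedure_code: str
    scanned_udis: List[str] = field(default_factory=list)

@dataclass
class Claim:
    """What the billing system would forward to the insurance company."""
    patient_id: str
    procedure_code: str
    implant_udis: List[str] = field(default_factory=list)

def build_claim(record: EhrProcedureRecord) -> Claim:
    """EHR -> billing -> claim: carry scanned UDIs through to the claim."""
    if len(record.scanned_udis) > MAX_UDIS_PER_CLAIM:
        print("warning: claim field holds only eight UDIs; extras dropped")
    return Claim(record.patient_id, record.procedure_code,
                 record.scanned_udis[:MAX_UDIS_PER_CLAIM])

# Example: a stent case with one scanned (fictitious) UDI
record = EhrProcedureRecord("patient-001", "stent-placement",
                            ["(01)00812345678906(17)200930(10)LOT123"])
print(build_claim(record))
```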

We have just released a white paper describing the status of the work in the Cath Lab and the plans for transmitting the UDI from there to Blue Cross Blue Shield of Massachusetts, our partner in this project. In addition to providing details on our process, the document includes background on UDIs and on other projects aimed at their implementation in hospitals. I encourage you to read the white paper, particularly if you are interested in the details of getting UDIs onto claims.

Posted in Unique Device Identifier

The charade of hospital rankings – Revisited

Posted at 5:32 pm on Mar 31, 2015

In August 2013, inspired by an article by Elisabeth Rosenthal in the New York Times, I wrote about the charade of hospital rankings. Ms. Rosenthal had reported that Dr. Nicholas Osborne of the University of Michigan had found a large discordance (a fancy technical term for lack of agreement) between the rankings published by U.S. News & World Report and HealthGrades. My point was more fundamental: I suggested that the rankings were not helpful to a person seeking care for a particular condition because they were not condition- or specialty-specific.

A broader discordance among ratings is reported in the March 2015 issue of Health Affairs. The authors include researchers well known for their work in health care quality and safety. Having analyzed the ratings produced by Consumer Reports, HealthGrades, the Leapfrog Group and U.S. News & World Report, the authors reach a conclusion elegantly summarized by their title: “National Hospital Ratings Systems Share Few Common Scores And May Generate Confusion Instead Of Clarity.” Exhibit 1 illustrates some important differences.

Exhibit 1 – Hospital rating systems used by four organizations

  • Consumer Reports – Focus: safety (“hospital’s commitment to the safety of their patients”). Scoring: 0–100 scale.
  • HealthGrades – Focus: consistent performance on risk-adjusted mortality and complication rates for 27 conditions and procedures. Scoring: lists of Top 50 and Top 100 hospitals (no ranking; no list of low performers).
  • Leapfrog Group – Focus: safety (freedom from harm). Scoring: letter grade (A–F).
  • U.S. News & World Report – Focus: best medical centers for the most difficult patients, based on 16 specialties, of which 12 are “data-driven” rather than reputation-driven. Scoring: 0–100 for the hospital and for each rated specialty.

It is evident that the differences among the scales alone – never mind the focus of the ratings and associated details – complicate comparisons. The challenge is compounded by differences in the criteria for selecting hospitals to be rated. When all is said and done, only 83 hospitals were rated by all four organizations. The absence of agreement was striking: no hospital was rated as a top performer on all four lists, and only three hospitals appeared among the top performers when any three lists were compared. No hospital was rated as a low performer on all three of the lists that rate a full range of performance.
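
To see what such a comparison involves operationally, here is a minimal sketch (with invented hospital names) of the set arithmetic behind the finding: intersect the four top-performer lists, then examine the pairwise overlaps.

```python
# A minimal sketch of the overlap arithmetic behind the Health Affairs
# finding. The hospital names are invented for illustration.
from itertools import combinations

top_performers = {
    "Consumer Reports": {"Hospital A", "Hospital B", "Hospital C"},
    "HealthGrades":     {"Hospital B", "Hospital D", "Hospital E"},
    "Leapfrog Group":   {"Hospital B", "Hospital C", "Hospital F"},
    "U.S. News":        {"Hospital C", "Hospital D", "Hospital G"},
}

# Hospitals rated "top" by all four systems: intersect all four sets.
common = set.intersection(*top_performers.values())
print("Top performers on all four lists:", common or "none")

# Pairwise overlaps show where the partial agreement lies.
for (name1, set1), (name2, set2) in combinations(top_performers.items(), 2):
    print(f"{name1} & {name2}: {len(set1 & set2)} hospital(s) in common")
```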

The authors conclude that “[w]hile the lack of agreement among these rating systems is largely explained by their different foci and measures, these differences are likely unclear to most stakeholders. The complexity and opacity of the ratings is likely to cause confusion instead of driving patients and purchasers to higher-quality, safer care.” They suggest that these organizations should help the public and other stakeholders interpret their results and provide detailed information regarding the methods used to arrive at the ratings.

Although it is difficult to argue with these suggestions, I believe it is futile to create ratings at the hospital level. A hospital-level rating does not help answer the question I raised in my previous blog on ratings: “where [do I] find the best care in my region for my condition, covered, of course, by my insurance plan?” Nor does it help hospitals target their improvement programs.

I believe that all stakeholders will remain ill served by the majority of current rating methods until medical specialties screw up their courage to develop meaningful ratings. Because such ratings could be applied uniformly to all hospitals, useful comparisons could be drawn. Sadly, no specialty has followed the example of the Society of Thoracic Surgeons.

Perhaps it’s time for the prominent authors of the article and other leaders in health care quality and safety to use their influence to mobilize specialty societies to develop rating systems. Perhaps they could also encourage state regulators to apply pressure to specialties’ local chapters.

Posted in Quality and safety

The true cost of robotic hysterectomy

Posted at 2:18 pm on Aug 08, 2014


Along with all the hype about robotic surgery, we finally get some credible data and a useful proposal.

The data

In the February 2013 issue of the Journal of the American Medical Association, Jason Wright and his colleagues from Columbia University compared the outcomes of robotically assisted benign hysterectomy with those of other methods.[1] The most meaningful comparison is with laparoscopic surgery performed without the assistance of a robot. They found no significant advantage to performing the procedure with the assistance of a robot, yet the method carried an additional (incremental) cost of about $2,000. Despite this apparent disadvantage, the authors report that robotically assisted surgery for benign hysterectomy is being adopted more rapidly than “standard” laparoscopic surgery.

The editorial

The article was accompanied by an editorial[2] drawing attention to direct-to-consumer marketing of robotic surgery and its likely role in driving demand, thus fueling “unnecessary utilization.” The authors of the editorial, Joel Weissman and Michael Zinner from Brigham and Women’s Hospital in Boston, went further: they pointed out that, since reimbursement for laparoscopic benign hysterectomy is the same whether or not it is robotically assisted, patients and hospitals have no incentive to use the less expensive option. To stimulate the right behavior – in this particular case, where the outcomes are effectively the same – Weissman and Zinner suggest that, when a patient requests robotically assisted surgery, a copayment equal to the additional cost be imposed. Where physicians and hospitals are driving the demand, they suggest that the providers be asked to justify their recommendation. (In the interest of brevity, I’ve simplified things a bit; for the details, please consult the editorial.)

The opportunity cost

Although it was not mentioned in the article by Wright and his colleagues, robotically assisted benign hysterectomy consumes more OR time than the “standard” laparoscopic method. Thus, when estimating the copayment, a premium should be added to account for the opportunity cost: the reimbursement the hospital would have received by fitting a greater number of standard laparoscopic procedures into the available OR time.

The estimate of the opportunity cost

How large should this premium be? In our paper, “Calculating the true cost of robotic hysterectomy,” which appears in the August issue of Healthcare Financial Management, Vikram Tiwari, Warren Sandberg and I provide the answer. We employ a simulation model to estimate the number of robotically assisted and standard laparoscopic procedures that can be performed in one month in a dedicated OR, and then calculate the resulting difference in a hospital’s cash flow. For a range of reasonable values of the important parameters, we find that, when the premium is taken into account, the appropriate copayment could be at least three times as large as the one proposed by Weissman and Zinner. As we point out in our article, the total copayment represents 7 to 9 percent of the median income of a family of four in the United States. Translation: it’s a big deal!
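
To give a flavor of the approach, here is a minimal Monte Carlo sketch of the simulation logic; the case durations, OR schedule and reimbursement figure are illustrative placeholders, not the calibrated parameters from our paper.

```python
# A minimal Monte Carlo sketch of the opportunity-cost logic. Every number
# below (durations, schedule, reimbursement) is an illustrative placeholder,
# not a calibrated parameter from our paper.
import random

random.seed(42)
OR_MINUTES_PER_MONTH = 20 * 10 * 60  # 20 working days of 10-hour OR time
REIMBURSEMENT = 10_000               # per hysterectomy, either method (placeholder)

def monthly_case_count(mean_minutes: float, sd_minutes: float) -> int:
    """Simulate how many cases of a given type fit into one month of OR time."""
    minutes_used, cases = 0.0, 0
    while True:
        duration = max(60.0, random.gauss(mean_minutes, sd_minutes))
        if minutes_used + duration > OR_MINUTES_PER_MONTH:
            return cases
        minutes_used += duration
        cases += 1

# Assume robotic cases run longer on average than standard laparoscopic ones.
trials = 1_000
lap = [monthly_case_count(120, 30) for _ in range(trials)]
rob = [monthly_case_count(160, 40) for _ in range(trials)]

cases_forgone = (sum(lap) - sum(rob)) / trials  # per month, on average
premium = cases_forgone * REIMBURSEMENT / (sum(rob) / trials)
print(f"Cases forgone per month: {cases_forgone:.1f}")
print(f"Opportunity-cost premium per robotic case: ${premium:,.0f}")
```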

If transparency were introduced – with prospective patients routinely shown the outcome data for the two minimally invasive methods, along with what they would have to pay for the privilege of the robotically assisted one – it is likely that a majority of patients would forgo the robot.

The lesson

So, what’s the really big lesson here? We need more transparency; more researchers like Jason Wright and his colleagues to collect and carefully analyze masses of data; and more highly regarded influencers like Joel Weissman and Michael Zinner to call ‘em as they see ’em.


[1] Wright, J.D., Ananth, C.V., Lewin, S.N., et al., “Robotically Assisted vs Laparoscopic Hysterectomy Among Women With Benign Gynecologic Disease,” Journal of the American Medical Association, February 2013.

[2] Weissman, J.S., and Zinner, M., “Comparative Effectiveness Research on Robotic Surgery,” Journal of the American Medical Association, February 2013.

Posted in Health care economics, Perioperative Systems-related, Quality and safety, Robotic surgery

Competing on outcomes: Stirrings and supporting evidence – and a call for leadership

Posted at 11:00 am on May 02, 2014

I have been an advocate of competition in health care because I believe it promises to improve health care value (outcomes delivered per dollar spent). In fact, Warren Sandberg and I described one pathway to advancing the concept in a blog that appeared in Health Affairs in September 2013. In it, we cited events (e.g., the publication of ratings of heart bypass surgery) that might have stimulated competition on outcomes but didn’t. We went on to propose a solution, which we call Facilitated Quality Competition, and a trial of our proposal. In the special case of heart bypass surgery, we proposed that state regulators steer heart bypass patients to services that had been awarded two or three stars for that procedure by the Society of Thoracic Surgeons.

Early this year, I decided to get the ball rolling: I contacted the regulator in one state to propose the trial. Surprise! Surprise! I was told that all hospitals that perform heart bypass surgery in that state had received either two or three stars. Now I’m waiting to hear whether this gratifying response is the result of policy or accident. And there the matter stands.

While awaiting the response, I discovered that the Boston Consulting Group (BCG) had published a white paper advocating competition based on outcomes. It cites instances of authorities steering patients to higher performing providers, and an example of an institution that has prospered as a result of advertising its credible outcomes data. However, the authors admit that “[a]s of yet, no national health system is explicitly designed for competing on outcomes.”

Nevertheless, some progress is worth celebrating. In the following, I summarize two examples from the BCG white paper in which patients are steered to high-quality services, and one example of an institution that posts outcomes that have been reported in the peer-reviewed literature.

In Sweden, the Stockholm county council is steering patients with ST-elevation myocardial infarctions away from a well-known and highly regarded hospital to a different institution, because data had shown that patients at the latter had a higher survival rate. Several years ago, the council instituted a system of bundled payments, with a bonus for superior performance, for hip and knee replacement surgery. The policy resulted in a 20% reduction in complications and revisions compared to a control group, and the cost per patient of these surgeries has declined “by an equivalent amount.” Now other counties in Sweden are planning to adopt the practice pioneered by the Stockholm county council. In effect, some of Sweden’s payers are experimenting with outcomes-based competition, and learning that they and their patients benefit.

In Germany, the Martini-Klinik, a private hospital specializing in the treatment of prostate cancer, has used detailed data to achieve superior results on two important outcome measures: severe erectile dysfunction and urinary incontinence. As a consequence, its volume of prostate cancer surgeries has grown by about 16% per year for the past eight years, and it now performs the highest number of prostate cancer surgeries in the world. The Martini-Klinik posts its outcomes data on its web site. And these are not just any data: they have passed the test of being published in The Journal of Urology.

In the U.S., Walmart is steering employees requiring transplants or heart or spine surgery to six leading institutions rather than to the patients’ local hospitals. According to the New York Times article that reported the adoption of this policy, Walmart believes that its employees will receive better care at these centers on account of their patient volumes, and that it will benefit from lower costs. Neither the BCG paper nor the Times article describes the data Walmart used in selecting the six centers.

The three examples suggest that health care value improves when credible data are publicly available and are used to steer or attract patients to the centers delivering the best outcomes.

What’s still needed in the U.S. is (1) a policy of encouraging outcomes-based competition and (2) publicly available outcomes metrics for common procedures, developed, published, and supported by medical societies. Let’s hope that influential and courageous health care leaders will take a stand on this very pragmatic pathway to improving health care quality.

Posted in Quality and safety

Consumer Reports shoots itself in the foot

Posted at 1:39 pm on Sep 30, 2013

Just after I referred to hospital rankings as a charade, Consumer Reports jumped into the fray with ratings of 2,463 hospitals and specific ratings for five common procedures: knee replacement, hip replacement, back surgery, coronary angioplasty and carotid artery surgery (Consumer Reports, September 2013, pp. 31–41). The ratings are based on billing claims for Medicare patients; they are risk-adjusted and use the death of a patient undergoing elective surgery and extended length of stay as quality measures for 27 different procedures.

I am deeply disappointed.  Here’s why: In September 2010, Consumer Reports took a giant step forward by making public the ratings for Coronary Artery Bypass Grafting (CABG) developed by the Society of Thoracic Surgeons (STS).  This was correctly referred to as a watershed event because a medical society, not a third party, had developed the rating.  The STS had come up with a three-star index for service/hospital combinations using clinical, not administrative, data.  The index was based largely on outcomes – risk-adjusted mortality and morbidity – with a component that takes account of conformance to best practice.   By contrast, Consumer Reports now takes a step back by (1) rating hospitals rather than a service at a hospital; (2) using administrative data rather than clinical data; and (3) coming up with ratings not created by clinicians themselves.  Why would any intelligent patient requiring a particular procedure rely on the Consumer Reports rating when making an important decision?
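
For readers unfamiliar with the term, “risk-adjusted” boils down to comparing observed outcomes with the outcomes a risk model predicts for a hospital’s particular mix of patients. Here is a minimal sketch of the observed-to-expected (O/E) calculation that underlies such indices; the patients and predicted risks are invented for illustration, and the actual STS methodology is considerably more elaborate.

```python
# A minimal sketch of risk-adjusted mortality via the observed-to-expected
# (O/E) ratio, the basic construct behind indices like the STS star rating.
# The patients and their model-predicted risks are invented for illustration.

# Each tuple: (died, probability of death predicted by a risk model)
patients = [
    (0, 0.02), (0, 0.05), (1, 0.30), (0, 0.01),
    (0, 0.10), (0, 0.25), (0, 0.03), (0, 0.04),
]

observed = sum(died for died, _ in patients)  # deaths that occurred
expected = sum(risk for _, risk in patients)  # deaths the case mix predicts

oe_ratio = observed / expected
print(f"Observed deaths: {observed}, expected: {expected:.2f}")
print(f"O/E ratio: {oe_ratio:.2f} (below 1.0 = better than the case mix predicts)")
```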

To give Consumer Reports its due, the article includes some excellent points:

  • They’re open with their objectives: to shed light on hospital quality and to push for greater transparency.
  • They observe that current incentives give hospitals little reason to improve.
  • They indirectly criticize NSQIP (the National Surgical Quality Improvement Program) for not allowing its ratings to be made public.
  • They acknowledge that hospitals can excel at one procedure but perform less well on others. For example, they mention that the Massachusetts General Hospital excels at CABG, but performs less well on hip replacement.
  • They commend STS for making its ratings public.

Many readers who examine the Consumer Reports ratings will question the methodology behind them, because many hospitals with excellent reputations (e.g., Memorial Sloan-Kettering Cancer Center, Johns Hopkins Hospital, the Cleveland Clinic Foundation, Mayo Clinic – St. Mary’s Hospital, and Virginia Mason Medical Center) receive an “average” rating. Yet Consumer Reports does not even report whether it tried to calibrate its procedure ratings by comparing its CABG scores with those created by the STS.

In summary, I am saddened. Consumer Reports, which is highly regarded for its product ratings, could have – in time – developed credibility in health care ratings. Instead, it jumped the gun and shot itself in the foot.

Posted in Quality and safety
  • © Copyright 2023 Twin Peaks Group, LLC
  • All Rights Reserved.