
Does Disease Management Work Yet?

by Scott MacStravic

When the federal Congressional Budget Office published its report on DM in 2004, it concluded that some particular DM programs aimed at some diseases did appear to save money, but that the evidence overall was mixed.  Because it had employed a “scientific” standard for its evaluation, it treated all DM programs as alike, combining many vendors’ approaches as long as they dealt with the same disease.

This is fine in controlled clinical trials, where the same disease and one specific treatment are involved, even if delivered by many providers.  But it makes no sense at all when entirely different vendors or other healthcare organizations are involved, each with its own approach, including its own DM methods and costs.  The “mixed results” that CBO reported were pretty much what one would expect from a diverse set of providers, particularly when only Medicare beneficiaries and sickness cost reductions were examined.

Despite the 2004 findings, Medicare has gone ahead with DM testing along more or less the same lines, using various providers and different approaches in its “Medicare Health Support” demonstration project.  A recent article noted the mixed results from this project as well, concluding that a different approach was needed. [M. Quilty & W. Todd “Next Generation Disease Management: Back to the Future” Health Leaders News Apr 12, 2007 (www.healthleadersmedia.com)]

But mixed results do not mean no results, and if even one program dealing with one disease saves the kind of money that Medicare wanted, a strong argument can be made that this demonstrates DM’s potential.

When a wide variety of providers and approaches is included, a finding of mixed results means only that not all DM programs work, while the very fact that the results are mixed shows that some programs do work.  Logic would suggest that the programs that work be embraced and only those that do not be set aside.

Moreover, it has generally been the case in studies that examined DM and other proactive health management efforts over time that savings tend to improve with time: results may be much greater in the second and third year, for example, than in the first.  Modest results in the first year should not be seen as failure, but tracked for more years before any conclusion is reached.  By selecting the programs that work and following them longer, Medicare might find that DM can work quite well, thank you, once its variability is recognized.

GlaxoSmithKline, for example, working with its own employees’ health, found that in the first year proactive health efforts saved only $233 per participating employee in combined healthcare and disability cost reductions, among 6,049 employees continuously employed from 1996 to 2000.  But savings rose to $375 in the second year, $944 in the third and $950 in the fourth.  Savings averaged $613 per participant per year over four years, almost three times the first-year level. [G. Stave “Quantifiable Impact of the Contract for Health and Wellness” JOEM 45:2 Feb 2003 109-117] Had the program ended after one year, it would have missed out on almost $14 million in savings.
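The “almost $14 million” figure can be sanity-checked from the per-year numbers cited above. A minimal sketch, assuming the forgone savings are simply the years-two-through-four per-participant savings multiplied by the 6,049 continuously employed participants:

```python
# Rough check of the GSK figures cited above (Stave, JOEM 2003).
# Assumption: "almost $14 million" = savings in years 2-4 that would
# have been forgone had the program stopped after year 1.

participants = 6049                     # employees continuously employed 1996-2000
yearly_savings = [233, 375, 944, 950]   # per-participant savings by year, in dollars

four_year_total = sum(yearly_savings) * participants
first_year_total = yearly_savings[0] * participants
forgone = four_year_total - first_year_total

print(f"Four-year total savings:          ${four_year_total:,}")
print(f"Forgone if stopped after year 1:  ${forgone:,}")
```

The forgone amount comes out to roughly $13.7 million, consistent with the article’s “almost $14 million.”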

Given the traditional short-sightedness of governments pressed by the necessities of this year’s budget, it is not surprising that early results are often taken to mean far more than they should, and that the pluses in mixed results are overlooked.  But it might be hoped that this time Medicare will wait until the full three years of the demonstration project are complete, and consider the possibility that even one or a few successes is actually a good result.

1 Comment »

  Warren Todd wrote @ April 18th, 2007 at 6:33 pm

I concur with Scott that we must not overreact to preliminary findings from all the Medicare demos. I made this same “patience” speech back when the early Medicaid pilots in Florida were not going well and were getting a lot of bad press.

Notwithstanding, my concern is not so much with the actual negative results now being reported for some of these programs, but with the fact that the early results, as reported in the weekly IDMA DM World e-Report [http://www.dmalliance.org/index.php?page=report], are not providing the type of outcomes analysis needed to help develop better models. This, to me, is a real waste of our taxpayer dollars; merely knowing whether these programs “work” or “don’t work” is not enough. We need to be designing our outcomes studies to tell us “what works,” “what does not work,” and “what works less well.” With all the funds allocated to “measuring” the MHS pilots, the Medicare Coordinated Care programs, etc., I would hope that we would have more answers and some better direction on how to improve our programs.
