Authors

Joseph V. Simone, MD


How do we know that one cancer center is "better" than another? If we had a cancer diagnosis, should an institution's ranking in U.S. News & World Report be our standard? U.S. News has virtually taken over the evaluation of cancer programs and centers, as well as of universities and hospitals. The rankings are highly publicized, and institutions trumpet their own ranks when they are high. In my opinion, though, such rankings depend too much on the opinions of deans and other academic types, most of whom have no first-hand knowledge of what makes a cancer center good or bad today (i.e., not 10 years ago), or who focus only on scientific eminence, personal relationships, or a long-standing reputation.

  
JOSEPH V. SIMONE, MD, has had leadership roles at St. Jude Children's Research Hospital, Huntsman Cancer Institute, Memorial Sloan Kettering Cancer Center, the University of Florida Shands Cancer Center, the National Comprehensive Cancer Network, and the National Cancer Policy Board, and has served on the NCI's Board of Scientific Advisors. He has been writing this award-winning column since 2003, and welcomes comments and suggestions, as well as for his blog on career development for medical professionals (

The result is basically the same old lineup, with small changes back and forth each year. Some centers ranked near the top would never be there if they were evaluated by a seasoned, knowledgeable group of cancer center leaders. And there are centers lower in the ranking that I would gladly choose if I or a loved one got cancer.

 

But the most important failing of such competitive lists is that the evaluations do not measure the quality, efficiency, and value of cancer care, which the public (and most doctors) care about far more than academic reputation. The public and the medical community are often impressed, and deceived, by terms such as "NCI-designated" or "comprehensive," which have nothing to do with the quality and efficiency of the care of cancer patients.

 

So what should be done? I believe the cancer center community should take control of the conversation on this issue by developing a realistic, broad-based set of standards for measuring cancer center performance in patient care, efficiency of care, clinical outcomes, and the relative cost of care. This is not an easy task, but it is critical to understanding how good a cancer center is in practice, not in theory.

 

The cancer research of NCI-designated cancer centers (currently 60 that conduct research and also care for cancer patients) is evaluated by the NCI once they have passed the test of admission to that elite group. The NCI funds such centers to ensure a strong research infrastructure, assessing each institution's cancer research effort against stringent guidelines through a peer review process. The NCI re-reviews each center every five years to ensure consistent research excellence, and it also grades centers on how well they bring research findings to their cancer patients in the clinic.

 

So one may reasonably take success in the NCI review process, which regularly grades each center's research program, as ample evidence of high-quality research. As good as it is, however, this process does not measure the quality, efficiency, and value of cancer care.

 

Complex and Difficult for Several Reasons

Measuring the quality of cancer care is complex and difficult for several reasons. Patients with the same diagnosis vary a great deal. Cancers have many subtypes, and the physical and psychological constitution of patients also varies. Even the culture that the patient lives in can determine how soon he or she seeks medical help, a critical factor in outcome, and patients' socioeconomic group may have a major impact on the outcome of treatment.

 

Doctors, too, vary in skill and knowledge; in the face of evidence for a better therapy, some are slow to adopt it and continue with the treatment they have long given and are comfortable with. There are many other confounding factors that must be accounted for, though that is less of a problem today, as medical information is increasingly digitized and thus more accessible and manageable.

 

AACI?

The Association of American Cancer Institutes (AACI), whose membership includes many cancer centers, would be a good candidate to lead such a project. It has access to all the cancer centers and their expertise, and it is a well-established organization in the cancer community.

 

Experts would develop the model and its measures with complete transparency about how the measures were chosen and applied. Some information is public and can be gathered independent of any one organization: the number of cancer research grants, Cancer Center Support Grant scores, Joint Commission ratings, and hospital evaluations by independent entities. Other measures would need to be developed, such as clinical outcomes, quality measures, and patient satisfaction, measured in a way that yields better data (not Press Ganey, which I believe is flawed because every single hospital I have visited over decades claims a 90+ percent approval rating). Instead of a ranking of 1, 2, 3, I would use a few broad categories such as "outstanding," "excellent," "very good," and "needs improvement." Ultimately, the group that collects and analyzes the measures could offer a service to guide a low-scoring center toward improvement.

 

The data and results from such an effort would belong to the participating cancer centers. Each center would receive a report of how well it performed compared with the other (anonymized) centers.

 

There would certainly be some cost involved, but volunteers, such as health services research faculty, could do much of the initial spadework. Done right, the program could generate revenue to help cover its costs; for example, a cancer center that wished to be evaluated would pay a fee to cover the cost of site visits, data collection, and so on.

 

The program, run by representatives of cancer centers and other experts, would eventually become the arbiter of what a program of excellent cancer care, research, and training should look like. Some community hospital systems may also wish to be evaluated; they could have their own category (no laboratory research) and pay for the process.

 

Potential Problems

There are potential problems, of course:

 

* Many in our profession reject anything new out of hand;

 

* Conflicts of interest would need to be assiduously avoided; and

 

* Accepting support from commercial entities such as pharmaceutical companies would risk a loss of credibility for academic and other institutions.

 

 

Nonetheless, I believe this approach should be pursued, with oncologists leading the pack, to develop a system that can honestly advise patients and referring doctors about the quality of cancer care at a given center.