Right Care, Right Patient, Right Time: The Role of Comparative Effectiveness Research

April 17, 2019

Public Briefing

It’s no secret that the practice of medicine is complex. Comparative effectiveness research (CER) offers an approach to evaluating the outcomes of different health care methods and identifying which treatments matter most to patients. CER also has the potential to inform broader health care policy conversations on value, costs, and delivery system reform. This briefing will inform attendees about the purpose and perspectives surrounding CER, including how researchers conduct CER studies and how various stakeholders may use the results. Panelists will also explore the current CER policy landscape.


  • Alfiee M. Breland-Noble, Ph.D., MHSc, Project Director, African American Knowledge Optimized for Mindfully-Healthy Adolescents (AAKOMA), Center for Trauma and the Community, Georgetown Medical Center
  • John Bulger, D.O., MBA, Chief Medical Officer, Geisinger Health Plan
  • Eleanor Perfetto, M.S., Ph.D., Senior Vice President, Strategic Initiatives, National Health Council
  • Sean Tunis, M.D., M.Sc., Founder and Senior Strategic Advisor, Center for Medical Technology Policy
  • Gail R. Wilensky, Ph.D., Senior Fellow, Project HOPE

Join the conversation on Twitter using the hashtag #AllHealthLive 


This event was made possible by a Patient-Centered Outcomes Research Institute
Eugene Washington PCORI Engagement Award.

Event Resources


All materials can be found in full at the links provided.

Key Resources (listed chronologically, beginning with the most recent)

  • “Highlights of PCORI-Funded Research Results.” Patient-Centered Outcomes Research Institute. February 2019. Available at http://allh.us/XqhR.
  • “Enhanced Cure Rates for HCV: Geisinger’s Approach.” Khurana, S., Gaines, S., Lee, T. NEJM Catalyst. July 11, 2018. Available at http://allh.us/XxYM.
  • “The ICER Value Framework: Integrating Cost Effectiveness and Affordability in the Assessment of Health Care Value.” Pearson, S. Value in Health. March 2018. Available at http://allh.us/dTQV.
  • “Improving the Relevance and Consistency of Outcomes in Comparative Effectiveness Research.” Tunis, S., Clarke, M., Gorst, S., et al. Journal of Comparative Effectiveness Research. March 2016. Available at http://allh.us/UNnV.
  • “Supporting Better Physician Decisions at the Point of Care: What Payers and Purchasers Can Do.” Contreary, K., Rich, E., Collins, A., et al. Mathematica Policy Research. February 10, 2016. Available at http://allh.us/U98D.
  • “Comparative Effectiveness and Cost-Effectiveness Analyses Frequently Agree on Value.” Glick, H., McElligott, S., Perfetto, E., et al. Health Affairs. May 2015. Available at http://allh.us/yDKr.
  • “Comparative Effectiveness Research.” Donnelly, J. Health Affairs. October 8, 2010. Available at http://allh.us/kJNu.
  • “The Policies and Politics of Creating a Comparative Clinical Effectiveness Research Center.” Wilensky, G. Health Affairs. July/August 2009. Available at http://allh.us/qBVy.
  • “Comparative Effectiveness Research: The Need for a Uniform Standard.” Gottlieb, S., Klasmeier, C. American Enterprise Institute. June 2009. Available at http://allh.us/PyhU.


Additional Resources (listed chronologically, beginning with the most recent)

  • “Choosing Wisely Campaigns: A Work in Progress.” Levinson, W., Born, K., Wolfson, D. JAMA. May 15, 2018. Available at http://allh.us/Bmdh.
  • “Policy Strategies for Aligning Price and Value for Brand-Name Pharmaceuticals.” Pearson, S., Nichols, L., Chandra, A. Health Affairs. March 15, 2018. Available at http://allh.us/wERQ.
  • “Increasing Uptake of Comparative Effectiveness and Patient-Centered Outcomes Research among Stakeholders: Insights from Conference Discussion.” Law, E., Harrington, R., Alexander, G., et al. Journal of Comparative Effectiveness Research. February 2018. Available at http://allh.us/mvAX.
  • “Exploring Patient-Centered Comparative Effectiveness Research.” Martin, H. Bipartisan Policy Center (blog). June 2, 2017. Available at http://allh.us/brfc.
  • “Making Genomic Medicine Evidence-Based and Patient-Centered: A Structured Review and Landscape Analysis of Comparative Effectiveness Research.” Phillips, K., Deverka, P., Tunis, S., et al. Genetics in Medicine. April 13, 2017. Available at http://allh.us/KxX7.
  • “Distinguishing Selection Bias and Confounding Bias in Comparative Effectiveness Research.” Haneuse, S. Medical Care. April 2016. Available at http://allh.us/wevM.
  • “Developing Evidence That Is Fit for Purpose: A Framework for Payer and Research Dialogue.” Sabharwal, R., Graff, J., Holve, E., et al. The American Journal of Managed Care. September 17, 2015. Available at http://allh.us/pNAJ.
  • “Comparative Effectiveness Research through a Collaborative Electronic Reporting Consortium.” Fiks, A., Grundmeier, R., Steffes, J., et al. July 2015. Available at http://allh.us/64xg.
  • “Clinical Comparative Effectiveness Research through the Lens of Healthcare Decisionmakers.” Price-Haywood, E. The Ochsner Journal. Summer 2015. Available at http://allh.us/c3ky.
  • “Stakeholder Participation in Comparative Effectiveness Research: Defining a Framework for Effective Engagement.” Deverka, P., Lavallee, D., Tunis, S., et al. Journal of Comparative Effectiveness Research. March 2012. Available at http://allh.us/RmdW.
  • “How Best to Engage Patients, Doctors, and Other Stakeholders in Designing Comparative Effectiveness Studies.” Hoffman, A., Montgomery, R., Tunis, S., et al. Health Affairs. October 2010. Available at http://allh.us/wWG8.
  • “The Political Fight Over Comparative Effectiveness Research.” Iglehart, J. Health Affairs. October 2010. Available at http://allh.us/J8yP.
  • “How Will Comparative Effectiveness Research Affect the Quality of Health Care?” Docteur, E., Berenson, R. Urban Institute. February 2010. Available at http://allh.us/Ddwu.
  • “Rethinking Randomized Clinical Trials for Comparative Effectiveness Research: The Need for Transformational Change.” Luce, B., Kramer, J., Tunis S., et al. Annals of Internal Medicine. August 4, 2009. Available at http://allh.us/9nvJ.
  • “Comparative Effectiveness Research: Medical Practice, Payments, and Politics: The Need to Retain Standards of Medical Research.” Selker, H. Journal of General Internal Medicine. June 2009. Available at http://allh.us/A4CX.
  • “Implementing Comparative Effectiveness Research: Priorities, Methods, and Impact.” The Brookings Institution. June 2009. Available at http://allh.us/Kakw.
  • “Comparative Effectiveness Research and Evidence-Based Health Policy: Experience from Four Countries.” Chalkidou, K., Tunis, S., Lopert, R., et al. The Milbank Quarterly. June 2009. Available at http://allh.us/6Fky.
  • “Developing a Center for Comparative Effectiveness Information.” Wilensky, G. Health Affairs. November 2006. Available at http://allh.us/JBMt.



Alfiee M. Breland-Noble Georgetown University Medical Center, Project Director of AAKOMA at the Center for Trauma and the Community

202-687-4812   ab2892@georgetown.edu

John Bulger Geisinger Health Plan, Chief Medical Officer

JBulger@geisinger.edu

Eleanor Perfetto National Health Council, Senior Vice President of Strategic Initiatives

202-785-3910   eperfetto@nhcouncil.org

Sean Tunis Center for Medical Technology Policy, Founder and Senior Strategic Advisor

410-547-2687   Sean.tunis@cmtpnet.org

Gail Wilensky Project HOPE, Senior Fellow



Experts and Analysts

Joseph Antos American Enterprise Institute, Wilson H. Taylor Scholar in Health Care and Retirement Policy

202-862-5983   JAntos@aei.org

Eric B. Bass Johns Hopkins Evidence-based Practice Center, Director


Tanisha Carino FasterCures, Milken Institute, Executive Director

202-336-8900   tcarino@fastercures.org

Joey Mattingly University of Maryland School of Pharmacy, Assistant Professor in the Department of Pharmacy Practice and Science

410-706-8068   jmattingly@rx.umaryland.edu

Anand Parekh Bipartisan Policy Center, Chief Medical Advisor

202-204-2400   aparekh@bipartisanpolicy.org

Eugene Rich Mathematica Policy Research, Senior Fellow

202-250-3544   erich@mathematica-mpr.com

Karen A. Robinson Johns Hopkins Evidence-based Practice Center, Director


Morgan H. Romine Duke-Margolis Center for Health Policy, Policy Fellow in Strategic Engagement

202-621-2815   morgan.romine@duke.edu



Arlene Bierman Agency for Healthcare Research and Quality, Director, Center for Evidence and Practice Improvement 

301-427-1500   arlene.bierman@ahrq.hhs.gov

Carolyn Clancy Department of Veterans Affairs, Deputy Undersecretary for Health, Discovery, Education and Affiliated Networks

202-461-9121   carolyn.clancy@va.gov

Michael Lauer National Institutes of Health, Deputy Director for Extramural Research

301-496-1096   Michael.lauer@nih.gov

Janet Woodcock Food and Drug Administration, Director of the Center for Drug Evaluation and Research

301-797-3200   Janet.Woodcock@fda.hhs.gov





Andrew Barnhill GlaxoSmithKline, Director of Federal Policy and HHS Strategy

andrew.t.barnhill@gsk.com

Sarah Emond Institute for Clinical and Economic Review, Executive Vice President and Chief Operating Officer

617-528-4013   semond@icer-review.org

Sara van Geertruyden Partnership to Improve Patient Care, Executive Director

202-688-0226   sara@pipcpatients.org

James Gelfand The ERISA Industry Committee, Senior Vice President of Health Policy

202-627-1922   jgelfand@eric.org

Greg Gierer America’s Health Insurance Plans, Senior Vice President of Policy

202-778-3200   ggierer@ahip.org

Jennifer Graff National Pharmaceutical Council, Vice President of Comparative Effectiveness Research

Jennifer Jones Blue Cross Blue Shield Association, Legislative and Regulatory Policy Director


Chip Kahn Federation of American Hospitals, President and Chief Executive Officer

202-624-1534   ckahn@fah.org

R. Shawn Martin American Academy of Family Physicians, Senior Vice President of Advocacy, Practice Advancement and Policy

202-232-9033   smartin@aafp.org

Amy M. Miller Society for Women’s Health Research, Chief Executive Officer

202-223-8224   amiller@swhr.org


Please note: This is an unedited transcript. For direct quotes, please see video at: http://allh.us/GNv4

SARAH DASH: Good afternoon everybody. Thank you so much for being here on this beautiful spring day. We are delighted here at the Alliance to be hosting a briefing today to talk about Right Care, Right Patient, Right Time and the Role of Comparative Effectiveness and Patient-Centered Outcomes Research in helping to deliver the care that best aligns with patients’ values and preferences and clinical outcomes. So my name is Sarah Dash, I’m President and CEO of the Alliance for Health Policy. How many people have been to an Alliance briefing before? If I could just get — oh, you guys are great. Thank you. Thanks for coming back. If you don’t know about us, we are a non-partisan, not-for-profit organization, really dedicated to advancing knowledge and understanding of all types of health policy issues, and we are here as a resource for you all, so we are really glad you’re here. Before we get started, I would like to thank the Patient-Centered Outcomes Research Institute, PCORI, for their support of today’s briefing through a Eugene Washington PCORI Engagement Award. You can follow along with today’s discussion, if you like, on Twitter at the hashtag #AllHealthLive, and you can submit questions via Twitter as well. We have an amazing panel today and they are going to tell us about how we use evidence to improve care, to — as I said — identify which treatments, therapies, and modalities may work better for different patients, and how we are learning about incorporating patient viewpoints and preferences into really kind of the whole continuum of care delivery. We are going to explore the purpose and perspectives surrounding comparative effectiveness research, or CER, patient-centered outcomes research, or PCOR, and other types of research. So I’m going to go ahead and introduce our panelists without further ado.
You do have their full bios in your packets and then we’ll go ahead and get started. So first, joining us today, all the way at the end of the table from me, is Eleanor Perfetto. Eleanor is Senior Vice President of Strategic Initiatives at the National Health Council, where she conducts research and policy work on patient engagement and healthcare, including comparative effectiveness, patient-centered outcomes research, medical product development, value assessment, and healthcare quality. We’re really grateful to have her here today to discuss why CER is important to patient groups and what they are looking for in research results and policies. Next, we will hear from Gail Wilensky. Gail is a Senior Fellow at Project HOPE and, I should say, former administrator of the Health Care Financing Administration under the first President Bush. Her expertise is on strategies to reform healthcare in the United States with particular emphasis on Medicare, comparative effectiveness research, and military healthcare. And she is going to tell us about the historical context of CER and how that brings us to today. After Gail, we will hear from Sean Tunis, Founder and Senior Strategic Advisor at the Center for Medical Technology Policy. Sean works to provide a neutral platform for multi-stakeholder collaborations that are focusing on improving the quality, relevance, and efficiency of clinical research. He will use his time to share how the culture of clinical research has shifted to emphasize evidence and the patient voice, as well as to describe various methodologies. We’ll then hear from Dr. Alfiee Breland-Noble, also known as Dr. Alfiee to her patients. She’s Project Director of the African American Knowledge Optimized for Mindfully-Healthy Adolescents (AAKOMA) project, housed within the Center for Trauma and the Community at Georgetown Medical Center.
So her research focuses on reducing mental health disparities among racially diverse adolescents, youth, and families, and she’s going to discuss how a CER study is conducted, including how patients, especially those who are typically excluded from clinical research and trials, are involved — or can be involved throughout the process. Then finally we’re delighted to have with us John Bulger — Dr. John Bulger is Chief Medical Officer at Geisinger Health Plan. He will explain how the health plan uses comparative effectiveness in practice with their patients and tell us about Geisinger’s experience with the ProvenCare hepatitis C program as a case study. So with that, thank you all so much for being here. I’m going to turn it over to Gail to kick things off and I’m going to just hand this down — I’m sorry, to Eleanor to kick things off — I apologize, and that’s — just give that to Gail for when it’s her turn. Let’s go.

ELEANOR PERFETTO: Thank you. I don’t have any slides, thanks. So my job here today is to kind of kick things off with grounding everyone in the room with some background and some definitions. And so the first thing I want to start off with is telling you a little bit about what the National Health Council is. We are a not-for-profit membership organization here in Washington, D.C., and our membership is predominantly made up of patient advocacy organizations. We have other organizations in membership, such as not-for-profit groups that are interested in healthcare, like FasterCures for example. We also have professional associations, we also have business and industry, so — pharmaceutical companies, health insurance companies. But the largest group that we have in membership are the patient advocacy groups, and I know that some of them are here today. So we focus really on what’s important to patients through our patient advocacy group membership, and they really drive the work that we do, and our mission is to be supportive of them.
So to give some context for — we’re talking about comparative effectiveness research and patient-centered outcomes research, and we’re really talking about two sides of the same coin. They are different, but they are very related to one another. So I wanted to start off talking about that, because we have had this evolution from what originally started off as conversations about comparative effectiveness research and has now evolved into comparative effectiveness research that’s really patient-centered and patient-focused research. So what is comparative effectiveness research? Well, a very formal definition is that it is research that compares two or more things to one another to kind of figure out which one works best. Only it has to be much more nuanced than that. You know, we had some old conversations about which works best — the pink pill or the blue pill? And we know that this is a much more complicated question than which one of them works best. It’s actually, which one of them works best for which patients, because sometimes the pink pill will work for some patients and not for others, and sometimes the blue pill will work for some patients and not for others. So comparative effectiveness research is really focused on, can we drill down and understand, while we avoid doing trial and error with patients, which one will work best for which patients, and make our selections that way, so that patients can be part of the conversation and their clinicians can be part of the conversation, knowing full well which one they expect to see work best for them. The other side of that coin is patient-centered outcomes research, and that’s making sure that we have patients become more involved in the research process. Not as study subjects. We will always need to have patients be study subjects, but rather as partners in the research themselves. And that really changes the way that we do research.
So it really gets to why patient-centered outcomes research is so important. We had research that was dominated by the scientists and the researchers who were planning the protocols, and the questions were driven by those researchers. The outcomes they were choosing were driven by those researchers. And we had designs for the research that might have been pristine designs, really getting to a very elegant research study, but in the end it wasn’t answering the questions that patients and families needed in order to make decisions for themselves. And so when we have — when we talk about patient-centered outcomes research, we talk about questions that are important to patients. It’s the questions that they need answers to, that will help them make choices. We talk about the outcomes that are important to patients. So when we look at the end points for those studies, are we zeroing in on the end points that patients really care about, as opposed to the end points that the clinician or researcher might care about? And then also patients being involved in informing the design of the study, so that there will actually be studies that patients will participate in. Now, we want to avoid having too many drop-outs in studies, or a protocol that someone won’t sign up for because it sounds too onerous to them. So that’s really what patient-centered outcomes research — getting at comparative information — is all about. So in summary, I want to say that what patients are really looking for is information to help them figure out what’s right for me, or for a family member when they’re making that decision. They want to be able to sit down with their doctor and say, of the options I have, which one will work best for someone who looks like me, and for someone who lives in my circumstances and has the preferences that I have? And I think we have to kind of stop now and think: Are we there yet?
Because we’ve seen this big transition to go from not having patients engaged to now having patient-centered outcomes research, and in the last ten years we’ve seen it happen. We’ve seen that transition happen. And with PCORI really spearheading and leading a lot of this work, we’ve seen that transition come to the place where even organizations like the Food and Drug Administration are looking at patient-focused drug development, and really have taken leadership there to move this forward within the clinical research realm of new medical product development. And so when you see those kinds of changes happening in the research culture, you know that we’ve actually seen quite a shift in the way that we do healthcare research. I think one of the things that we have to all accept is that our work is not finished, because there’s so much work to be done. And if we started off thinking ten years ago that we were going to be done in ten years with having all of this work done, we were quite naïve if we thought that. We should have always known that this was going to take longer to do, because there are so many patient populations, so many diseases, so many very specific questions that we need to get the answers to. But right now, I think one of the important things to take away is that patient-centered outcomes research and comparative effectiveness research are expected by patient populations, because they still need that information in their decision making.

SARAH DASH: Thank you so much, Eleanor. Gail?

GAIL WILENSKY: Thank you. Pleasure to be here. I wanted to put some of today’s discussion in a bit of a historical context. Both how in the United States we began to get interested in the concept, and how, as a market-oriented economist, I personally became invested in the concept of comparative effectiveness research.
And it really had to do with the notion of how to enable both patients and physicians to be in a position to make better decisions. My own position on this came about from realizing that we had — and have had over several decades — several challenges in healthcare. One is that we’ve had spending growth rates that are unsustainable. And while it slowed for a few years, spending is still expected to grow at a faster rate than the economy. And in addition to that, we have had both problems with patient safety and problems with quality. So spending a lot and not always spending it as effectively as we might. When you think about comparative effectiveness information, it’s important to think about it not as an end point, but rather as a basic building block, providing information on what works when for whom, provided by a particular type or place of healthcare provider. This was a point that Eleanor raised. It’s not an answer that is likely to be true for all people at all times and all places. And also recognizing that technologies rarely are either always effective or never effective, and it’s trying to understand, again, for whom, when, under what circumstances, particular technologies might be effective. If we look at where other countries have been on this process, and frankly they have adopted it at an earlier point, we see that it is a very centralized process in most countries. Literature-review focused, or actually looking at experimental designs. Usually by agencies that are part of the government, which is not surprising, since in most of these countries the government is a major part of the payer, if not a completely centralized system. Whether the recommendations are mandatory, and how transparent the process is, differ from country to country. It was clear to me, in thinking about it, that the U.S. needed something different. We need to figure out how to spend smarter.
If you’re not going to use direct regulatory control, you both need better information and ultimately better incentives. And it meant focusing on conditions, rather than, as they did in Europe, frequently focusing on a particular therapeutic or intervention. And not just drugs and devices, but surgical procedures versus medical procedures. Invest in what is not yet known and use what is known more effectively. Recognizing it’s a very dynamic process. The place to start has always seemed to me to be the high-cost medical conditions, where there is a lot of variation in treatment. Conditions reflecting the highest-cost DRGs with substantial geographic variation, for example, are a good proxy. And we also need to encourage private funding, subject to agreed-upon guidelines with results that can be audited, because we need to recognize that it’s very unlikely that the government alone, through PCORI or any other agency, is going to be able to get all of the work done that needs to happen. We also need to recognize that if we’re going to get good decisions, we are going to need to have data from very different sources. We sometimes tend to think about the double-blinded, randomized controlled trial as the gold standard. Actually, what it is, is the way to get rid of selection bias. But it frequently introduces other bias or other deviations from what is likely to occur when something is actually used, because different kinds of providers or institutions are providing it, or because the patients are different from those that are in the RCTs. Sean, to my left, has talked about real-world RCTs and I hope he’s continuing to talk about that. Using epidemiological studies, administrative data — it’s important to understand that all data have limitations.
An invaluable lesson I learned as the co-director of the first of the big expenditure surveys — the National Medical Care Expenditure Survey, now called the Medical Expenditure Panel Survey — is that all data have errors and all data have limitations, and that we need to look at the various data sources in order to learn what we can learn from them. It is important to get good information about comparative effectiveness research, but unfortunately we are still very much in the early stages both of the investment and of actually learning what it means for different classes or groups of individuals when they have particular types of interventions. I’m an economist, and so for me this is an effective building block to get information about what works best for whom, or what works better for whom under various circumstances. It’s not the end of the decision making. If we’re going to use this as a building block to get better outcomes, which I hope that we’ll continue to do, we also need to have better incentives in place. We need to make sure that the financial incentives between the people providing the service and the technology and the people using the service are in sync, otherwise you’re not going to get a good outcome. We need to make sure that we’re rewarding institutions and clinicians who provide high quality and efficiently produced care. We talk a good game; we actually rarely do that very well. We’ve talked about using value-based insurance in the private sector. It is more absent than present, or it affects a very small part of the service that is delivered. And ultimately we also need to understand that if we’re really going to get better at healthy outcomes for individuals, we are going to need to encourage them to be a part of the process also. Encourage them, reward them for healthy lifestyles, and maybe discourage them from other lifestyles.
It’s a critical component to have better information, but it’s not the only thing we’ll need to have the outcome we want. Thank you.

SEAN TUNIS: So good afternoon everybody. I wanted to thank Sarah and the folks at the Alliance for inviting me here. Also wanted to thank the PCORI folks who are here, and all the folks at PCORI who for the last eight years I think have done a tremendous job of kind of advancing the field of patient-centered or comparative effectiveness research. You know, back when I think Gail started writing about this in 2006, the ideas were out there. I think — or thanks to Gail and some others, you know, kind of pulling this off and creating a real science and discipline out of a field that didn’t exist before. It’s been great to watch it happen, and I think we always thought from the beginning that this was sort of a generational — you know, it was a generational agenda, not something — not an issue that was going to be solved in five or ten years. So you know, I think things are on a good track. I’m going to do a little bit of looking backwards and then looking forwards around comparative effectiveness research. I think some things that have happened are really strong and powerful and moving the field forward, and there are some things that might be useful to emphasize more going forwards. So back in the early 2000s, I was working at CMS. I guess, Gail, when you were there it was called HCFA. Sometime after this, if you want to hear the stories about how Tom Scully decided on the name CMS, I’m happy to share those with you.
But the whole — so one of the functions that I oversaw at CMS was the coverage decision making and deciding which new technologies to pay for, and as part of that process we would ask AHRQ — the Agency for Healthcare Research and Quality — to do these hundreds of pages of reviews of all of the available evidence, and what I came to call the “evidence paradox” was this kind of what — this recurring problem that 19,000 randomized trials are published every year, tens of thousands of other clinical studies, and virtually every report that we got from AHRQ, no matter what the topic, concluded that the evidence is inadequate or poor quality. And you’d think with 19,000 shots on goal, you know, every once in a while. So this is a puzzling phenomenon. So when you started to kind of drill down deeper into what the problems were with how research was being done, some of the issues were — these are a little misformatted, but that’s probably my fault. First of all, that the research agenda was being driven by clinical investigators, researchers, and not really informed by the target decision makers — patients, clinicians, payers were very — had very limited involvement in deciding which research questions were going to get looked at. Part of the problem just derived from who got this thing going. Then typical studies were limited to narrow patient populations, they were often done in academic centers, not in contexts that were generalizable, and oftentimes studies were designed comparing to placebo or treatments that were not in wide use. So a big part of the sort of underlying issue was that the research agenda just wasn’t aligned with the real questions that people who had to make healthcare decisions actually had. So that was kind of the — that then, I think, is what primarily fed all of these gaps in evidence and the ability of anybody to make well-informed decisions.
A way that I’ve come to sort of — underscoring this, is using — and finally, the most important thing, and why I think the name “patient-centered outcomes research” is so critical, is it’s about making sure you’re actually measuring the outcomes that matter most to patients, not the outcomes that are the easiest to measure, or that are most reliably measured. So this was a big issue. It’s not important to read the details on this — but does anybody recognize what this comes from?

SPEAKER: Consumer Reports.

SEAN TUNIS: Yeah, it’s a page from Consumer Reports. It happens to be a Consumer Reports review of electric screwdrivers. The reason I’m using this is to see — you know, on the left is all the different — looks like Japanese electric screwdrivers — and then across the top you have all of the features that consumers care most about when they’re buying electric screwdrivers — like the speed and the power, the charge time, the run time, all of those things. Well, if you think about it, all of those features that consumers care about are determined through careful focus groups and interviews. They say, well, you know, what does a consumer care most about? And then those are the things that the engineers go and develop bench tests for, so that you can directly compare the different brands on those outcomes that matter to people who buy the product. Believe it or not, this doesn’t happen in healthcare, or hasn’t happened, right? So this would be a Consumer Reports table that you would find if you were looking at a drug. Right? It’s like, for some of them they measure the power and the charge time, but the studies don’t measure the run time, et cetera, et cetera. So if Consumer Reports filled their pages with tables like this, nobody would buy the magazine. You can’t possibly be an informed consumer.
And so I think one of the things that patient-centered outcomes research got right was, there's no way you're going to produce useful information unless you first concentrate on what actually matters to the people who are going to be using the services — and apparently that was obvious for electric screwdrivers, but not obvious for medical devices or drugs or diagnostics, et cetera.   It looks like I'm only going to have time to do a little bit of the political history that I think is relevant to today.   SARAH DASH:   You can take a little extra time, because that's important.   SEAN TUNIS:  Okay.  So there are people in the room who will clearly want to correct me — I hope they will stay quiet — no.  So the original draft legislation to create an institute to do comparative effectiveness research — it was going to be called the Comparative Effectiveness Research Institute; that became a problem in the context of the Affordable Care Act debate, because that institute became thought of as the "death panel" that was going to determine whether old people were well enough to get a hip replacement, right?  So the institute that was planned became sort of synonymous with the death panel.  So the clever people — I'm pretty sure it was in the Senate, on the Finance Committee — decided to rename the Comparative Effectiveness Research Institute the Patient-Centered Outcomes Research Institute, because who could be against patient-centeredness, right?   Well, once the name was changed, and through a lot of effort to get all of the language right, et cetera, the PCORI leadership, once the institute existed, took the name seriously.  So they decided, well, this is going to be a patient-centered outcomes research institute, and what's more, they decided cleverly that to avoid any of that "death panel" spillover, they were going to avoid talking about costs or cost-effectiveness or QALYs or even payers.
But the problem with that is, as Gail said, one of the original inspirations for setting this thing up was to help people make cost-conscious comparative decisions in the face of unsustainable spending trends.  You know, it sort of narrowed the space within which I think PCORI initially felt comfortable operating.  Also, as a result of that, when the institute took seriously the idea of being patient-centered, they put a lot of energy into talking to patients and talking to patient groups and finding out, well, what were their important questions?  And it turns out they had a lot of questions about things like care coordination and how they could have a better care experience, and weren't so interested in those kinds of Consumer Reports-type questions — not that they were disinterested, but those didn't rise to the surface.   Then a last point — I won't go over too much because we can come back to things in the Q&A — but what I think of as one of the many positive unintended consequences of the name change was this very dramatic and important focus on what patients really needed to know.  And as a result — and I think, Eleanor, you said this too — as a result of this sort of ah-ha moment, that taking into account the perspectives of patients actually would help you design more relevant studies, in ways that were more informative to clinical decision making — you know, that sort of percolated out and it influenced the National Institutes of Health, which started a whole pragmatic trials collaboratory with much greater engagement of patients.  The FDA's patient-focused drug development, I think, kind of exploded and expanded because of the work being done at PCORI.
And then I can tell you, because I work a lot with the pharma, device, and diagnostics industries, they've really gotten the message that they need to engage patients from phase one and phase two studies, and they are making decisions about which products to actually develop based on early feedback and input that they're getting from the patient community.  That never happened before, and I think it's going to mean that the products that start to appear now and in the future are really going to be those that have more meaningful benefits to patients and are actually studied in ways that will be much more helpful for patients and clinicians to be able to make informed decisions.   Sorry to have gone over, but thanks!   SARAH DASH:  Great.  Thank you, Sean.  So we'll hear from Dr. Alfiee next.  And we're going to come back to some of these points.   ALFIEE BRELAND-NOBLE:  Okay.  Good afternoon.  You are all so quiet.  Okay, I'm used to standing up, this is a little different for me.  So I want to resist the urge, but I can't help myself, so I have to say it, because she's one of my idols.  My lovely colleague Dr. Tunis has gone over just a little teeny bit — which is fine, but I'm going to be like Auntie Maxine and I'm going to reclaim my time.  But you were wonderful, I learned a lot.  Thank you so much.  All right, so I'm looking at the time, it's ticking, and I like to talk, so I'm going to try to get through this.   I'm going to say good afternoon, and I'm just going to assume you're all sending back good wishes, because you are all so quiet.  So we will just go with that.  My name is Alfiee Breland-Noble, and let me tell you real quick — in communities of color, and I see a lot of diversity in here, we like titles, okay?  Because sometimes the title, as a person of color — I'm clearly an African-American woman — the title confers the kind of respect that sometimes you don't get in other settings.  So when we say Dr.
Alfiee, it's not because I want to dismiss all of the wonderful people up here on the stage with me, it's just because that's what we do in communities of color, right?  So I have to set that stage, because that's a part of what we're going to talk about in my portion of this work.  So as my colleague so wonderfully did — she got through it — I'm so proud every time somebody gets the acronym.  When they get all the way through it and they don't stumble, I want to jump up and cheer, because it took me a long time to put that acronym together many years ago.   So what we do with the AAKOMA Project is we focus on mental health needs.  We started with African-American youth as an underserved population.  Why?  Because they are significantly underserved in terms of the services that they get for mental illness, right?  They don't get the same services as other folks.  We have expanded that to include other communities of color over the years, right?  So my one saying that I always share with people — I do a lot of public speaking and some TV work — is "a rising tide lifts all boats."  So if we can figure out how to reach the young people who are the least likely to be touched by good treatments — good treatments being the operative term — for depression particularly, and anxiety and other mental illnesses more broadly, then we can teach everybody how to get good treatment, right?  And so when we look at what the disparities are, the disparities are not necessarily in the prevalence of mental illness, particularly depression, because when you look across racial and ethnic groups, those rates are about the same in our epidemiological research.  Where you see the difference is in who gets care and in the quality of the care they get.   So that's what we're about.  We want to educate people about these issues with the AAKOMA Project, in collaboration with community.  Right?  So when we talk about patients, a lot of times people get the impression it's us and them.
I'm a provider, many folks up here are providers or have been providers in some way, and those folks out there are patients.  We are all patients.  Right?  At some point in our lives, we are all patients.  So what I try to do is keep that at the forefront of my mind when I'm working with community members.  So I like to think in terms of community — which, PCORI, they know I love them, they have been very supportive of my work — we are called patients.  Right?  So we are thinking about community and patients interchangeably.   This is just a picture that reflects some of the work that we do.  Almost all of our work is in collaboration with community.  We do a lot of community-engaged work and a lot of community-based participatory research in the context of PCORI and CER.  A lot of alphabet soup.  So I share with you just a teeny bit — again, I'm reclaiming my time — and I want to share with you: what are some of the barriers to care faced by young people of color?  So you see there are four listed here, including decreased access to and less availability of good mental health services, and a lower likelihood of receiving required services.  The single biggest provider of mental health care for African-American youth, based on the research, is the juvenile justice system.  That's when our kids get care.  Why?  In part because there are disparities — you take the same constellation of symptoms, like depression.  You have a white child, you have a black child or a Latinx child.  The white child is going to get mental healthcare; the black child and the Latinx child are going to be diverted into juvenile justice, right?  With the same set of symptoms.  That's a problem, right?  Because you're not getting the kind of care you might like to get if what you're diagnosed with is not depression but a disruptive behavior problem.  You're going to get different kinds of treatment.  Right?  So those have consequences.