Lee A. Resnick, MD, FAAFP
Urgent care medicine is a rapidly evolving discipline. Out of this evolutionary process, scientific skepticism is naturally born. It is the checks and balances of medicine, ensuring that what is purported to be true is, in fact, based on evidence, not speculation.
We welcome this inquiry and support the process necessary to lend legitimacy to what, thus far in the development of our discipline, has been mere estimation and speculation.
This is why UCA has committed the time and money to support such groundbreaking efforts as the recently announced sampling frame results and the upcoming benchmarking study. For the first time in the history of urgent care medicine, we will have scientifically validated data to support the contributions of the urgent care industry to the healthcare delivery system. These data will provide the backbone for future study in both clinical and healthcare services research.
But our work is not done. Each of you, as an individual practitioner of urgent care medicine, has the same mandate for scientific inquiry. There are a few basic principles for evaluating clinical studies that may be useful as you evaluate the literature for potential relevance to urgent care. I would like to review those principles here:
- What question is the study trying to answer?
- What is the quality of the evidence?
- How valid are the results?
- What is the relevancy to your practice?
There are several tools available to assist you in evaluation of clinical results. My favorite is the “PP-ICONS”¹ approach developed by Robert Flaherty, MD.
The approach is summarized below:
- Problem: What is the clinical condition being studied? This can easily be answered simply by reading the title.
- Patient or population: Is the group being studied similar to your patient population? This is critically important to the applicability of the results to your practice. Data collected from emergency department patients may not be wholly applicable to urgent care practice.
The same can be said of primary care data. This does not mean there will be no relevancy, but the reader must interpret the data with these population differences in mind.
- Intervention: What is the test or treatment being studied? For example: abdominal U/S for evaluation of appendicitis, or antivirals for Bell’s palsy.
- Comparison: What is the intervention being tested against? In the above examples, this could be abdominal CT or prednisone, respectively.
- Outcome: We are particularly interested in clinically relevant outcomes. This will limit the relevancy of many articles you see in the scientific literature.
- Number: Denotes the “power” of the study. More than 400 patients usually denotes adequate power. Fewer than 100 patients will make it difficult for the authors, and therefore the readers, to draw conclusions.
- Statistics: Review of all statistical terms and their relevance is beyond the scope of this letter, but one of the most clinically useful statistics is the “number needed to treat” (NNT). Simply put, this is the number of patients who must be treated for one person to benefit. If the study does not report an NNT, it can be calculated from the absolute risk reduction (NNT = 1/ARR); a brief worked example follows this list. A “good” NNT depends on a number of variables, including the risk and cost of the intervention, but NNTs of 5 to 10 are usually considered reasonable.
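To illustrate with purely hypothetical numbers: if an adverse outcome occurs in 20% of untreated patients but only 10% of treated patients, the absolute risk reduction is 0.20 - 0.10 = 0.10, and the NNT is 1/0.10 = 10. In other words, ten patients would need to be treated for one to benefit.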
I hope this helps you on your journey through the scientific literature. It is imperative for us to critically evaluate the evidence for quality, validity, and relevance to our discipline.
We want JUCM to be your forum for this discussion, so please share with us your findings and thoughts.