Abstract
In this issue of Pediatrics, Antommaria et al report an analysis of the quality of evidence and strength of recommendations from the current evidence-based clinical practice guidelines (CPGs) of the American Academy of Pediatrics (AAP).1 The good news is that they found that pediatric guideline recommendations report a similar level of evidence as those of other specialties. The bad news is that only 10% of recommendations have the highest level of evidence.

The authors point out that this represents an opportunity to improve the quality of the evidence supporting pediatric CPGs, and I am in strong agreement. Most people, especially parents, would be dissatisfied knowing that only 10% of our children's health care is guided by the highest level of scientific evidence. We need a robust childhood research agenda focused on the important gaps identified during the development of CPGs.

It is unlikely that every step in a complex CPG, let alone the actual process of clinical care, will ever have the strength of a randomized controlled trial behind it. Nonetheless, it is critically important to be transparent about the level of evidence supporting a recommendation so the user can decide how obligated she is to follow it.

Although the criteria for rating the quality of evidence appear objective, when it comes to grading a specific recommendation, there can be a great deal of judgment in choosing among a "Strong recommendation," "Recommendation," or "Option" to carry out an action. Those of us who have had the pleasure of serving on AAP CPG committees are all too aware that disagreements about the quality of evidence, and especially about the strength of recommendations, can be contentious. The epidemiological skill of committee members is variable, and committee representatives from different subspecialties may disagree because of the spectrum effect2: variation in the prevalence and severity of a condition between their practices.
This situation is further complicated by the large number of evidence-grading systems available.3,4 Even the AAP has begun experimenting with different grading systems.

One consequence of disagreement or uncertainty among the members of a CPG committee can be that they present recommendations in vague terms, underspecifying exactly who should do what to whom and under what circumstances. Underspecified terms like "timely follow-up," "frequent screening," or "consider antibiotics" make implementing a CPG challenging for the clinician and do little to promote consistent, high-quality care. Thankfully, the AAP has worked hard to eliminate such vagueness from its CPGs. The AAP has a special team, the Partnership for Policy Implementation, that specifically addresses vagueness in its CPGs.5 The clarity of a recommendation is as important as the evidence supporting it.

In fact, CPG committees often worry about being too prescriptive, particularly when the evidence is not strong and there is disagreement among committee members. In those circumstances, they will sometimes intentionally introduce vagueness into the language of a recommendation. However, this makes the recommendation less helpful to the clinician. Moreover, clinicians will deviate from even a strong recommendation when they feel the circumstances warrant it. After all, no CPG can anticipate every clinical situation.

Antommaria et al also point out that some "strong recommendations" are based on "X-level" evidence. X-level evidence is applied to recommendations that are very unlikely ever to be rigorously tested but for which there is a clear preponderance of benefit over harm. As a frivolous example, consider the recommendation to wear a parachute when jumping out of a flying plane. It is clearly a good idea, but no one is likely to conduct a randomized controlled trial.
Given the challenges to conducting pediatric research that Antommaria et al point out, one can imagine that pediatrics may have many X-level recommendations. This is acceptable because clinicians crave expert guidance in the setting of ignorance. Expert opinion will always win out when a clinician is faced with a novel clinical situation, but again, transparency about the evidence and strength of recommendation can give the clinician latitude when the situation calls for it.

One area Antommaria et al do not comment on is the large number of guideline recommendations the AAP produces. They found 14 current and active clinical practice guidelines with 236 recommendations. That is a lot of recommendations for a pediatrician to follow, but of course, these are only the evidence-based CPGs, the rarest of reports produced by the AAP. The much larger number of clinical reports and, of course, Bright Futures: Guidelines for Health Supervision of Infants, Children, and Adolescents, contain thousands of care recommendations.6 It has long been observed that the number of care guidelines vastly exceeds what can be done in a typical office visit.7,8

A rational solution to this problem is to prioritize the services provided to children so that each child receives the most beneficial care for his or her needs. However, prioritizing services for each child is a nontrivial exercise. The categorization of recommendations from the AAP is a step, albeit a crude one, toward such a prioritizing strategy, because combining a measure of benefit over harm with a rating of evidence quality to form a strength of recommendation yields a kind of summary priority score. If we felt confident that recommendations were scored equivalently across AAP guidelines, we might apply those with the highest strength of recommendation even at the expense of neglecting those with a lower rating.
Of course, we might want a more refined and consistent score, but CPG strength of recommendation is a start.

I applaud Antommaria et al for undertaking a careful examination of AAP CPG recommendations. Their findings highlight the importance of ensuring a robust child health research agenda based on the research priorities identified in AAP CPGs. Taken in the context of how CPGs are used in clinical practice, we can appreciate the importance and value of taking a rigorous approach to rating the quality of evidence and strength of recommendations in the CPGs we produce.