“Expose every belief to the light of reason, discourse, facts, scientific observations; question everything, be skeptical because this is the only chance at life you will ever get.” – James Randi

Magicians have been a part of society for centuries. Although the historical timeline is fuzzy as to when magic first appeared, accounts of the Acetabularii performing tricks with cups and balls are noted as early as 50–300 AD.1 Magic shows have served not only as entertainment, but as a distraction from the doldrums of everyday life. They are an invigorating diversion, pulling our attention away from the important matters at hand.

Unfortunately, today, many of us in the physical medicine field are acting as magicians – except the capes and top hats have been traded for white coats and reflex hammers. Interventions are often presented to patients as incredible magical tools that will correct any and all ailments. If, for whatever reason, a tool selected from the magical toolbox fails to reduce a patient's symptoms or improve function, then another tool is selected – and so on and so forth – constantly increasing patient cost. Choosing an intervention becomes an unscientific, subjective endeavor of trial and error. Too often we as clinicians select modalities on the basis of clinical expertise alone, treating that expertise as evidence of efficacy. Interventions and their selection criteria are the focus of this blog.

In blog 1 we discussed the fallacies associated with Sackett’s pillars of evidence-based practice, specifically as they relate to clinical experience.2 This experience is rehearsal, each time perfecting the magic show, but who is being fooled: the patient or the clinician? Research has demonstrated that increased rehearsal time does not equate to better patient outcomes or improved expertise.3,4,5 Using clinical experience as the rationale for an intervention predisposes us to employing modalities that have not gone through the rigors of scientific testing. These interventions are often based on ill-founded methodology and, at best, yield a fleeting, nonspecific outcome (placebo effect). As clinicians, we should think probabilistically about the likelihood that an intervention will succeed for a patient’s particular issue. In order to weigh the benefits of an intervention against its risks, we must understand what the intervention is actually doing – which requires scientific research. That research gives us a better sense of whether our intervention of choice is doing what we claim and whether it is applicable to a patient population – leading to a greater likelihood of a positive long-term outcome.

Evidence should guide our decisions about what (if any) modalities to utilize in patient treatment. Otherwise, we fall prey to the post hoc ergo propter hoc fallacy, defined as “after this, therefore, resulting from it,” in which a causal relationship is erroneously ascribed to two events simply because they occur in sequence. In clinical practice it looks like this: a patient has symptom/complaint “X,” we intervene with modality “Y,” and subsequently the patient’s symptom/complaint improves. We want to believe we are the cause of the change; however, research repeatedly demonstrates that almost any intervention “works” in the short term, and this is often merely regression to the mean.6 The question becomes: are lasting, long-term outcomes actually being achieved for the patient’s issue?
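To make the regression-to-the-mean point concrete, here is a minimal, purely hypothetical simulation (the 0–10 pain scale and every number in it are invented for illustration, not drawn from the cited study). Patients tend to present for care on an unusually bad day, so their follow-up scores improve even when nothing at all is done.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: each "patient" has a stable underlying pain level
# plus day-to-day noise, and tends to seek care on a bad day.
n_patients = 10_000
true_pain = rng.normal(5.0, 1.0, n_patients)              # stable underlying pain (0-10 scale)
visit_day = true_pain + rng.normal(0, 2.0, n_patients)    # noisy score on the day they present
followup = true_pain + rng.normal(0, 2.0, n_patients)     # noisy score at follow-up, with NO treatment

# Only patients who felt bad enough to seek care (score >= 7) are observed.
seen = visit_day >= 7
print(f"Mean score at presentation: {visit_day[seen].mean():.2f}")
print(f"Mean score at follow-up (untreated): {followup[seen].mean():.2f}")
# Follow-up scores drop substantially with no intervention at all:
# the apparent "improvement" is regression to the mean.
```

Any modality applied between those two measurements would look like it “worked,” which is exactly the trap.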

Often, under the guise of evidence-based medicine, low-quality studies are cited as further validation for an intervention’s use. Although statistical significance is a necessary component to consider when weighing whether one treatment is superior to another, statistical significance does not necessarily yield clinical significance.7 If we wish to operate under evidence-based medicine, more time must be dedicated to reviewing the methods sections of properly executed studies.
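As a purely illustrative sketch (again with invented numbers, not data from reference 7), the snippet below shows how a clinically trivial difference – 0.2 points on a 0–10 pain scale – becomes highly “statistically significant” once the sample is large enough.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical illustration: a 0.2-point average difference on a 0-10 pain scale
# is almost certainly not meaningful to a patient, yet with enough subjects it
# is highly "statistically significant."
n = 5_000
control = rng.normal(5.0, 2.0, n)
treatment = rng.normal(4.8, 2.0, n)   # true effect: a 0.2-point reduction

t, p = stats.ttest_ind(treatment, control)
print(f"Mean difference: {treatment.mean() - control.mean():.2f} points, p = {p:.2g}")
# A tiny p-value only says the difference is unlikely to be chance; it says
# nothing about whether a 0.2-point change matters to the patient (minimal
# clinically important differences on pain scales are commonly reported as
# roughly 1-2 points).
```

A tiny p-value is not a substitute for asking whether the effect size is large enough to matter.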

But we cannot simply blame our faith in clinical experience and our lack of scientific investigatory effort for the misguided utilization of ill-founded interventions. Each year, hundreds if not thousands of continuing education courses – which could more appropriately be called conned courses, or for the purpose of this blog, con(n)-ed – are offered to clinicians. Granted, not all continuing education courses are conning us; some are evidence-based, with noble intentions of educating the field, but unfortunately this is not the norm. Additionally, professional boards require us to take some combination of courses to maintain our licenses. The requirement to satisfy continuing education hours has opened up a market for courses that, under the guise of evidence, offer little of substance to clinical practice.

Con(n)-ed Courses

Con(n)-ed courses have managed to make an exorbitant amount of money off the promise of developing clinical expertise. These courses are driven by clinicians who claim the ability to achieve objective patient outcomes with their “magical” tools and systems of choice. These magicians often travel the world bestowing their product upon course participants, all the while charging hefty fees to see their “show.” They promote their products and systems with fancy words (functional movement, joint centration, fascial meridian lines, subluxations, myofascial adhesions, detoxification) for unfounded issues, normal variants, or conditions that have already been well defined. These magicians operate under the self-bestowed title of expert or guru and have managed to garner as much support and financial gain as most political campaigns, with somehow even less substance.

Many of us enroll in con(n)-ed courses in hopes of finding the “Ultimate Trick” for correcting patient ailments. Unfortunately, this can leave us blinded by the enlightened feeling received from participating in the course. Our clinical practice narrows to the course’s framework as we begin applying its constructs to every patient case. Consequently, we risk becoming too attached to our modality of choice, self-identifying with the magical tool (joint manipulations, IASTM, K-Tape, cupping, dry needling, etc.). The personal biases developed within this framework mislead us into thinking a patient’s short-term relief predicts long-term outcomes. Instead of evidential support through research, system gurus have been given a powerful stage from which to dictate what interventions should be utilized.

The sunk-cost fallacy makes it difficult for us or the gurus to admit our losses after spending hundreds if not thousands of dollars on certifications, especially once evidence contradicting the system is presented. Instead, patients and clinicians alike continue on, with smoke and mirrors, financially investing year after year under the premise that it is making us better, proudly sporting our acronym(s) of choice after our names.

This shifts the issue from patient needs to our drive for confirmation of our own skills and abilities. We want to see our trick work via patient affirmation. Ultimately, this places the locus of control in treatment on us, the clinicians, rather than on the patient.

The Process of Modality Selection

The decision-making process for utilizing a modality is being approached incorrectly in these weekend courses. Instead of discussions aimed at finding the best answer to patient issues, gurus are often focused on indoctrination. We as clinicians need to regularly read the research on the validity of modalities and question the framework behind a modality’s claimed efficacy. We do not have to execute research studies ourselves. However, if a guru wishes to sell an intervention, then he or she should be held accountable for presenting the research that validates the claims first. The burden of proof always lies with the person making the claim. Currently, these unsubstantiated claims must reach critical mass before research-oriented clinicians debunk the intervention and demonstrate the modality’s or system’s lack of correlation with long-term patient outcomes. The research, more often than not, reveals the patient’s original path would have led to health anyway – but that’s not magic, and not nearly as marketable. Unfortunately, by this point, the intervention has misguided an immeasurable number of patients and increased healthcare costs.

Many passive modalities are based on the false premise of “Tooth Fairy Science.” Dr. Harriet Hall explains this idea well:

“You could measure how much money the Tooth Fairy leaves under the pillow, whether she leaves more cash for the first or last tooth, whether the payoff is greater if you leave the tooth in a plastic baggie versus wrapped in Kleenex. You can get all kinds of good data that is reproducible and statistically significant. Yes, you have learned something. But you haven’t learned what you think you’ve learned, because you haven’t bothered to establish whether the Tooth Fairy really exists.”8

If we utilize passive modalities lacking evidential support, we are operating under the premise that we know which patient issues necessitate an intervention and what effect that particular intervention will have on the patient. In reality, we have mounting evidence that the issues we are often intervening on are normal variants occurring regularly in asymptomatic populations.9,10,11,12 Or, through natural disease progression, many of the issues being intervened on will resolve themselves with time. The chosen modalities are not producing long-lasting change, yet the magic show keeps recurring. Most likely the show goes on because of a powerful placebo effect and/or confirmation bias. This is why we need valid scientific research. As Danna G. Young recently stated,

“[Science] is a method of investigation designed to protect us from our most glaring weaknesses:  egotism, selective-perception, ethnocentrism, premature closure, and the human tendency to always think ‘I AM RIGHT.’”

Science is the antithesis of magic, and our greatest protection against being fooled.

Over the next several blogs, our hope is to present the evidence, or lack thereof, for currently in-vogue passive interventions: Kinesiology Tape (K-Tape), Instrument Assisted Soft Tissue Manipulation, Cupping, and Dry Needling.

Although magic shows are phenomenal entertainment to distract us from the mundanity of daily life, we should not be selling unfounded magical tools to our patients. They deserve better from us as clinicians. We should remember our obligation to patients – ensure our interventions are scientifically founded and guide their path toward long-term outcomes. Anything else is confirming our own biases while inflating patient costs. Remember, as chiropractors, physical therapists, or other licensed physical medicine clinicians, we are not an intervention – we are individual clinicians who comprise a field. Each of us utilizes interventions, and therefore being evidence-based about the efficacy of an intervention is our duty to patients. Hopefully, together we can answer difficult questions regarding patient ailments, interventions, and achieving long-term outcomes.

Reminder: If you want to discuss the article with us on Twitter, or just recommend a beer for us to try you can find us at:

@DMilesPT

@MichaelRayDC

References

  1. “History of Magicians – Timeline | All About Magicians.com.” All About Magicians. N.p., n.d. Web. 23 Aug. 2016.
  2. “Finding Balance on a One Legged Stool: Part 1.” The Logic of Rehab. N.p., 2016. Web. 18 Aug. 2016.
  3. Macnamara BN, Moreau D, Hambrick DZ. The Relationship Between Deliberate Practice and Performance in Sports: A Meta-Analysis. Perspectives on psychological science : a journal of the Association for Psychological Science. 11(3):333-50. 2016.
  4. Choudhry NK, Fletcher RH, Soumerai SB. Systematic review: the relationship between clinical experience and quality of health care. Annals of internal medicine. 142(4):260-73. 2005.
  5. Whitman JM, Fritz JM, Childs JD. The influence of experience and specialty certifications on clinical outcomes for patients with low back pain treated within a standardized physical therapy management program. The Journal of orthopaedic and sports physical therapy. 34(11):662-72; discussion 672-5. 2004.
  6. Menke JM. Do manual therapies help low back pain? A comparative effectiveness meta-analysis. Spine. 39(7):E463-72. 2014.
  7. Zarbin MA. Challenges in Applying the Results of Clinical Trials to Clinical Practice. JAMA ophthalmology. 134(8):928-33. 2016.
  8. Hall, Harriet. “Another Acupuncture Study – On Heartburn.” Science-Based Medicine. N.p., n.d. Web. 18 Aug. 2016.
  9. Brinjikji W, Luetmer PH, Comstock B. Systematic literature review of imaging features of spinal degeneration in asymptomatic populations. AJNR. American journal of neuroradiology. 36(4):811-6. 2015.
  10. Andrade NS, Ashton CM, Wray NP, Brown C, Bartanusz V. Systematic review of observational studies reveals no association between low back pain and lumbar spondylolysis with or without isthmic spondylolisthesis. European spine journal : official publication of the European Spine Society, the European Spinal Deformity Society, and the European Section of the Cervical Spine Research Society. 24(6):1289-95. 2015.
  11. Frank JM, Harris JD, Erickson BJ. Prevalence of Femoroacetabular Impingement Imaging Findings in Asymptomatic Volunteers: A Systematic Review. Arthroscopy : the journal of arthroscopic & related surgery : official publication of the Arthroscopy Association of North America and the International Arthroscopy Association. 31(6):1199-204. 2015.
  12. Beals CT, Magnussen RA, Graham WC, Flanigan DC. The Prevalence of Meniscal Pathology in Asymptomatic Athletes. Sports medicine (Auckland, N.Z.). 2016.