Before medical school I worked in a research lab investigating the relationship between stress and memory. As a research assistant, I dutifully administered memory tests and collected saliva samples to measure cortisol levels.
My boss sent the data to her statistician for analysis, and was thrilled to find that, despite the lack of connection between most of the stress and memory variables studied, there was one positive finding: a connection between hippocampal volume (the part of the brain associated with memory) and lifelong stress. I helped write the article, and it was published in a major medical journal.
It wasn’t until years later that I realized my boss was engaging in an extremely common, but scientifically misleading, practice called data mining. Instead of having one specific hypothesis (e.g., that high cortisol levels would correlate with memory loss as measured on one specific test), she looked at numerous variables associated with stress and memory in the hope that some positive result could be dug out of the heaps of data.
The problem with this approach is that if you test enough variables, you will, by chance alone, find a result that meets the somewhat arbitrary criteria for “statistical significance” that will allow your work to be published as a positive finding. For example (and this is an oversimplified example), if you flip a quarter enough times, eventually you may get heads 10 times in a row, but this doesn’t mean you’ve got a special quarter.
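The coin-flip intuition is easy to check numerically. The sketch below (my own hypothetical illustration in Python, not from the original study) flips a simulated fair coin 10,000 times and reports the longest run of consecutive heads; runs of 10 or more turn up routinely, even though the coin is perfectly fair.

```python
import random

def longest_heads_run(flips):
    """Return the length of the longest run of consecutive heads ('H')."""
    longest = current = 0
    for flip in flips:
        current = current + 1 if flip == "H" else 0
        longest = max(longest, current)
    return longest

random.seed(42)  # fixed seed so the demo is reproducible
flips = [random.choice("HT") for _ in range(10_000)]
print(longest_heads_run(flips))  # typically 10 or more for 10,000 fair flips
```

The same logic scales to research: each extra variable tested is another batch of coin flips, and with enough of them a “streak” that clears the significance bar is almost guaranteed.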
The pressure in the scientific community to publish positive results is enormous. After all, negative results don’t often advance careers or bring fame and fortune to the person who publishes them. It is not uncommon for researchers to data mine, to do statistical acrobatics, and even in more extreme cases to flat-out create fake data in order to get published (the New York Times reported earlier this year that Diederik Stapel, a Dutch social scientist and academic star, had risen to fame by faking experiments that were published in major journals for over a decade).
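Publication bias itself can be simulated in a few lines. In this hypothetical Python sketch (the study counts, sample size, and variable names are my own assumptions, not real data), every “study” tests a treatment whose true effect is exactly zero, but only results clearing the conventional two-sided 5% significance cutoff get “published”; the published literature then shows a sizable average effect anyway.

```python
import random
import statistics

random.seed(0)  # fixed seed for reproducibility

N_STUDIES = 1_000   # simulated studies of a treatment with zero true effect
N_SUBJECTS = 25     # sample size per study
# Two-sided 5% cutoff for the mean of N_SUBJECTS standard-normal draws
THRESHOLD = 1.96 / N_SUBJECTS ** 0.5

published = []
for _ in range(N_STUDIES):
    # Observed effect: mean of noisy measurements around a true effect of 0
    effect = statistics.mean(random.gauss(0, 1) for _ in range(N_SUBJECTS))
    if abs(effect) > THRESHOLD:  # only "significant" results get published
        published.append(effect)

print(f"{len(published)} of {N_STUDIES} studies published")
print(f"mean |published effect| = {statistics.mean(abs(e) for e in published):.2f}")
```

Roughly 5% of the null studies clear the bar, and every one of them reports an effect larger than the threshold, so the published record overstates a treatment that does nothing at all.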
As Stapel admitted to the reporter writing the story, “My behavior shows that science is not holy.”
In my article on the placebo effect, I referenced Irving Kirsch, who published a paper, and then a popular book, arguing that antidepressants act only as placebos in the vast majority of people. His work seemed well-researched and well-referenced, and he even contacted the FDA to get results of unpublished trials done by pharmaceutical companies (pharmaceutical companies generally avoid publishing results that show their medications don’t work—shocking, I know).
But then, last week, I watched a lecture by two psychiatrists and researchers who went through Kirsch’s work and pointed out a few significant problems with it. Kirsch argued that antidepressants don’t work for people with mild or moderate depression. However, he misclassified mild, moderate, and severe symptoms as mild, moderate, and severe depression, when, by definition, a person must have severe symptoms in order to be diagnosed with depression at all (if you only have a few severe symptoms, you would be diagnosed with mild depression).
When these psychiatrists reanalyzed Kirsch’s data using the correct classification, they found that antidepressants worked better than placebo for people with both moderate and severe depression, and that it was only in cases of mild depression where antidepressants didn’t beat the placebo.
Sound complicated? Well, the TV show 60 Minutes agreed. After Kirsch appeared on the show telling the world that antidepressants were no better than placebo, these two psychiatrists spent over an hour on the phone with the producers explaining the issues with his research.
Apparently they listened attentively, and agreed that the issue was more complicated than Kirsch made it seem, but decided not to have them on the show to offer a counterpoint. Statistics is a complex science, but news outlets like to push out stories that are simple and sensationalized.
So there are problems with the way research is conducted and reported, but what’s a well-intentioned healthcare practitioner to do? It doesn’t make sense to completely ignore a large body of scientific research that can help us practice medicine in a safer and more effective way. Instead of ignoring scientific research, or preaching it from the rooftops as if it were gospel, I propose a middle way.
First, let’s be honest that much of what we currently believe to be true in medicine will be disproven at some point. I remember the first time I ever heard that something I had learned as dogma in medical school was totally wrong; it was only six months after I graduated.
This reality is perhaps more true in psychiatry than in any other field, where we deal with an incredibly complex organ (the brain), and an even more complex system that is connected to the brain in a way we don’t fully understand (the mind).
Second, knowing that our knowledge base is ever evolving, let’s not be afraid to use some common sense. Do I really need a research study showing that being an empathetic and supportive physician improves outcomes to believe that it’s true? Or if a patient tells me a natural supplement helps them, should I tell them, “No, you’re wrong” just because there’s not yet a double-blinded, placebo-controlled trial on it?
Which brings me to my next point: the concept of “evidence-informed” rather than “evidence-based” medicine, which I first heard proposed at a conference on integrative medicine. Integrative medicine is a field that takes a holistic view of the patient and combines traditional, allopathic treatments with complementary and alternative approaches, such as acupuncture, homeopathy, and herbal medicine.
Skeptical allopathic doctors will often say that there’s “no proof” these alternative therapies work, but I’d argue that’s not the whole story. Instead, when evaluating the evidence base of a treatment (whether traditional or alternative), let’s consider its 1) safety, 2) tolerability, 3) cost, 4) efficacy, and 5) convenience.
Examining a treatment like chemotherapy, for example, which is potentially dangerous, expensive, and inconvenient, we should require a lot of research showing that it is effective before recommending it. But take a treatment like a multivitamin, which is safe, cheap, without side effects, and convenient. Do we really need a dozen positive studies before recommending it to our patients?
Lastly, let’s be skeptical of people or organizations with obvious agendas, no matter how much research they quote. With thousands of articles published a day, it’s easy to find a few dozen that support your agenda, whatever it may be.
Recently I read a book by the journalist Robert Whitaker called Anatomy of an Epidemic: Magic Bullets, Psychiatric Drugs, and the Astonishing Rise of Mental Illness in America. He argued, persuasively, that many psychiatric medications, most notably antipsychotics, are more harmful than helpful, and I naively assumed he was presenting an accurate and complete picture of the relevant literature.
At a few points I found myself rolling my eyes at his biased and selective stories of patients who were miraculously cured when they stopped taking their medications, but it wasn’t until I got to the chapter on lithium and bipolar disorder that I had to put down the book in disgust.
It was here that it became abundantly clear that Whitaker had selectively cited articles that supported his agenda, while leaving out dozens of well-known and well-done studies that presented a different, and more nuanced, point of view. Sigh. I guess sensationalism must sell more books than nuance.
At the opposite end of the spectrum, it’s probably wise to be skeptical of research presented and published by pharmaceutical companies. Earlier this year I was at the annual conference of the American Psychiatric Association, and during one of the breaks found myself wandering to the booth of Otsuka America Pharmaceuticals because they were offering free ice cream (yes, I’m easy to manipulate).
As I waited in line for my ice cream, I tried to avoid making eye contact with any of the reps, but one started chatting with me anyway. I told him I didn’t have much experience working with reps because they’re not allowed on campus at the academic medical center where I work.
He looked at me, aghast, and asked, “But how do you get education, then?” I was speechless. Umm… residency? Conferences? Lectures? Articles? Books? Supervisors? Mentors? I think I’m fine on the education front, thanks.
So in sum, instead of practicing “evidence-based medicine” as if the evidence were dogma, let’s use research to move the practice of medicine forward while also remembering its limitations, including biases against publishing negative results and the agendas of the people doing or reporting the research.
Let’s consider practicing “evidence-informed medicine” instead, where we consider factors like safety, tolerability, and cost when deciding how much evidence we need to see before recommending a treatment.
And lastly, let’s stay humble, respect how complex the body and brain are, and remember that there’s much that we still don’t know.
***
Photo by Sergei Golyshev
EEW says
Wow…Wow and triple Wow….Elana
Thank you for what I call a “mini-thesis” on the discovery and self-awareness of the “culture” or “pack” mentality of scientific research, pros and cons!! Excellent points.
I’m not even in the field of mental illness/imbalances and received an eye-opening layman’s-terms lesson on this subject after reading the complete blog :))
Again, I’ll quote their “Pack (wolves/dogs) Culture” mentality within the industry. You are a wealth of knowledge and “quick-study” on these subjects and I’m starting to realize, without subscribing to other Mental Wellness Bloggers, that I am getting a free education for the betterment of preparing me for the Healthy Minds job and the environment with discretion and silence.
I offer my “Attitude of Gratitude”…
James Clear says
Hey friend — really enjoyed reading this piece. I thought it was a balanced view of the topic and I especially liked that you offered the reminder that there is still much that we don’t know and that will change in the near future.
Keep carrying the banner for other healers!
Dr. J says
I could tell you some stories about what I’ve seen with scientific research, lol!
I have almost fifty publications in the scientific literature and have never done this. Perhaps because I take the road less traveled 🙂
I realized long ago that as alchemy was replaced by the scientific method, that method is not the end all and be all either!
As long as there are doctors like you out there, there is hope for actual progress!
Trevor Smith, FNP says
I recall a study from years ago about stress and anger/negative emotions shrinking the hippocampus, the gyri, and the brain overall; ironically, I don’t remember the details. Maybe I need to do my ginkgo, phosphatidylserine, acetyl-L-carnitine supplement trio for a few days! Nicely written article about productivity in the workplace, even in a scientific workplace intended to be neutral and unbiased, in search of truth independent of outcomes.
Andri Nieuwoudt says
Fantastic article; congrats. I am in the surgical field, and in this area it is even more difficult to convince anyone unless you quote “randomized controlled trials,” or to get articles published if you either do not toe the line or are not part of the established group. I have tried to get my voice heard: a simple voice telling fellow physicians that the prolapsed vaginal wall is a symptom of an underlying disease and not a diagnosis in itself. The art of vaginal surgery is practised by many, but mastered by few.
Arthur M. Strauss says
I like your sharing your observation and interest in integration. As a curious dentist who co-founded the Academy (now the American Academy of Dental Sleep Medicine), I find myself up against a dilemma for us all. That is the separation between medicine and dentistry, and the politics and money that likely keep research from looking into how the body is fully integrated through the jaw-tongue-throat relationship that controls our airway and, thus, our ability to breathe, the most influential provoker of the stress response.
The absence of this makes Medicine and Dentistry unscientific! I am not prepared to write a dissertation on this now; however you can find the information in my articles (Sleep apnea articles) posted on my website. Then if you are really serious about tackling this further, contact me!
Robert D. Stolorow says
This is a really good article! “Evidence-informed” is a much better term than “evidence-based,” which smacks of Scientism, about which I too have written: https://www.psychologytoday.com/blog/feeling-relating-existing/201206/scientism-in-psychotherapy
Courtney says
A-freakin-men!
Jason says
Interesting perspective and well-thought out article.
However, you seem to have misinterpreted what evidence based medicine is…
By definition it is
1. evidence
2. practitioner experience
3. patient preferences.
Some (that lack evidence or have negative evidence) seem to want to slant the conversation away from evidence and towards other stuff.
Cheers.
Kevin Urbanek says
You make excellent points. Keep on keeping on.