“You cannot manage what you do not measure.”
This is one of the most famous quotes in all of business school. Unfortunately, no one knows who first said it. Apparently, it’s a truth so self-evident that it’s been taught by every management guru and business prof ever. Pick your favorite guru, attribute the quote to him/her and you’ll probably be right.
The oldest mention I could find for this truism is from Lord Kelvin, the 19th-century physicist who gave us the absolute temperature scale. Remember zero kelvin? Anyone who can put zero at 459.67 degrees below zero Fahrenheit knows a thing or two about measurement, although there is no evidence that he parlayed it into a successful management career.
The basic idea is that you have to be able to quantify your current status, so that you can plan your next steps, so that you can know if and when you will reach your goal. In business, this is essential. Same for wilderness backpacking, United Way fund drives and telling kids on car trips how long ’til we get there.
How do we measure ‘better’ when we treat ADHD?
The answer to this is embarrassingly simple: we measure ‘better’ using the same yardstick we use for diagnosis. We ask how prevalent and impairing all of the core symptoms are, and how much the treatment has changed them.
Why is that embarrassing? Because very few of the people who treat ADHD actually do it. Most physicians don’t. Most therapists don’t. Most IEP committees don’t. Most ADHD coaches don’t. Prior to 2003, I didn’t either.
Prior to 2003, I began every patient visit by asking people how they (or their children) were doing. I’d record details of their answer. Research shows that stimulant therapies work about 80% of the time (with a 10% discontinuation rate for side effects). My patients’ experiences pretty much mirrored that.
Typical responses to the question, “How is the new medication working?” included “Wow!”, “Holy cow, what a difference!” and “We’re so pleased with the changes in our son that we’ve included you in our will.” Not really, but you get the drift.
In 2003, Strattera was introduced. Studies performed just like the stimulant studies found that it worked 60-70% of the time. It was great to have a new medication to work with, so I asked several dozen patients to try it. Every one of them came back and said something like, “I don’t think it’s doing very much.” When the standard of comparison is “Wow!” that’s pretty disappointing. Doctors like to think we’re helping to make a difference.
The disconnect between my patients and the studies plagued me. How could the drug work so well in the study clinics and not at all for my patients? Did Grand Rapids get a big batch of dud pills, or what?
I re-read the studies to make sure that I was prescribing the medication properly. The main difference between my clinic and the research clinics wasn’t in how we used the medicine, but how we measured the results.
Research clinics measure all of the ADHD core symptoms at most or all visits. When they report results, they don’t say things like “We found 20% ‘Wow’, 50% ‘Holy cow’, and 30% ‘Don’t see much happening in my kid.'” They talk in technical terms like ‘symptom score reduction’ and ‘between-group mean difference’.
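For concreteness, here is a minimal sketch of what a ‘symptom score reduction’ calculation looks like. It assumes each of the 18 core symptoms is rated on a 0–3 frequency scale and summed into a total score per visit, as on common ADHD rating scales; the function names and the example ratings are illustrative, not taken from any particular clinic’s form.

```python
# Illustrative "symptom score reduction" calculation.
# Assumption: 18 core symptoms, each rated 0 (never) to 3 (very often),
# summed into a total score at each visit.

def total_score(ratings):
    """Sum the 18 symptom ratings for one visit."""
    assert len(ratings) == 18 and all(0 <= r <= 3 for r in ratings)
    return sum(ratings)

def percent_reduction(baseline, followup):
    """Percent drop in total symptom score from baseline to follow-up."""
    base, follow = total_score(baseline), total_score(followup)
    return 100 * (base - follow) / base

# Example: mostly 2s and 3s at diagnosis, mostly 0s and 1s on treatment.
baseline = [3, 2, 3, 2, 2, 3, 2, 2, 3, 2, 2, 3, 2, 2, 3, 2, 2, 3]
followup = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1]
print(round(percent_reduction(baseline, followup)))  # prints 72
```

A 72% drop in total score is the kind of change a patient would call “Wow!”; the point of the arithmetic is that a 20–30% drop is real improvement too, even though it rarely earns an exclamation in the exam room.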
I really wanted my clinic to be more like the famous clinics at Harvard and SUNY, so I started remeasuring ADHD core symptoms at every visit. This isn’t profoundly complicated. My nurse had each patient complete a one-page paper form before the office visit, which we compared to the same form from earlier visits.
The form we used was one of the twenty-five pages of forms patients had completed before their first, diagnostic visit. It asks patients to estimate the prevalence of each of the 18 core symptoms. Patients didn’t complain about this additional step at office visits, probably because they were just grateful we weren’t repeating all 25 pages.
We found two interesting things when we started measuring ADHD symptoms. First, that Strattera actually did work 60-70% of the time. Second, we found that stimulants weren’t working as well as we thought. Statistically, they were not always a ‘Wow’. (This was because I wasn’t optimizing them, not because they weren’t effective.)
The reasons for these observations are complicated, so I’ll go into them in the next Attentionality post, “Getting Better: Comparing Treatments”.