“You cannot manage what you do not measure.”
This is one of the most famous quotes in business school. Unfortunately, no one knows who first said it. Apparently it's a truth so self-evident that every management guru and business professor has taught it at some point. Pick your favorite guru, attribute the quote to him or her, and you'll probably be right.
The oldest version of this truism I could find comes from Lord Kelvin, the 19th-century physicist who established the absolute temperature scale. Remember zero kelvin? Anyone who can pinpoint a temperature 459.67 degrees below zero (Fahrenheit) knows a thing or two about measurement, although there is no evidence that he parlayed it into a successful management career.
The basic idea is that you have to be able to quantify your current status, so that you can plan your next steps, so that you can know if and when you will reach your goal. In business, this is essential. Same for wilderness backpacking, United Way fund drives and telling kids on car trips how long ’til we get there.
How do we measure ‘better’ when we treat ADHD?
The answer to this is embarrassingly simple: we measure ‘better’ using the same yardstick we use for diagnosis. We ask how prevalent and impairing all of the core symptoms are, and how much the treatment has changed them.
Why is that embarrassing? Because very few of the people who treat ADHD actually do it. Most physicians don’t. Most therapists don’t. Most IEP committees don’t. Most ADHD coaches don’t. Prior to 2003, I didn’t either.
Back then, I began every patient visit by asking people how they (or their children) were doing, and I'd record the details of their answers. Research shows that stimulant therapies work about 80% of the time (with a 10% discontinuation rate for side effects). My patients' experiences pretty much mirrored that.
Typical responses to the question, “How is the new medication working?” included “Wow!”, “Holy cow, what a difference!” and “We’re so pleased with the changes in our son that we’ve included you in our will.” Not really, but you get the drift.
In 2003, Strattera was introduced. Studies performed just like the stimulant studies found that it worked 60-70% of the time. It was great to have a new medication to work with, so I asked several dozen patients to try it. Every one of them came back and said something like, “I don’t think it’s doing very much.” When the standard of comparison is “Wow!” that’s pretty disappointing. Doctors like to think we’re helping to make a difference.
The disconnect between my patients and the studies plagued me. How could the medication work so well in the study clinics and not at all in my patients? Did Grand Rapids get a big batch of dud pills, or what?
I re-read the studies to make sure that I was prescribing the medication properly. The main difference between my clinic and the research clinics wasn’t in how we used the medicine, but how we measured the results.
Research clinics measure all ADHD core symptoms at most or all visits. When they report results, they don't say things like "We found 20% 'Wow', 50% 'Holy cow', and 30% 'Don't see much happening in my kid.'" They talk in technical terms like 'symptom score reduction' and 'standardized between-group mean difference'.
I really wanted my clinic to be more like the famous clinics at Harvard and SUNY, so I started re-measuring ADHD core symptoms at every visit. This isn't profoundly complicated. My nurse had each patient complete a one-page paper form prior to the office visit, which we compared with the same form from prior visits.
The form we used was one of the twenty-five pages of forms patients had completed prior to their first, diagnostic visit. It asks patients to estimate the prevalence of the 18 core symptoms. Patients didn't complain about this additional step at office visits, probably because they were just grateful we weren't repeating all twenty-five pages.
We found two interesting things when we started measuring ADHD symptoms. First, Strattera actually did work 60-70% of the time. Second, stimulants weren't working as well as we had thought. Statistically, they were not always a 'Wow'. (This was because I wasn't optimizing them, not because they weren't effective.)
The reasons for these observations are complicated, so I’ll go into them in the next Attentionality post, “Getting Better: Comparing Treatments”.
Is there any way a parent can get hold of the one-pager you use to assess whether medications are working or not? I know my doctor doesn't assess it, and I'm really not sure whether the Concerta is helping my son or not. He just started, and he says it helps his golf: he can get over it more easily when he makes bad shots. But he's not back to school yet. That will be the test, whether he can actually go to class or not. I would really like to be able to track if the meds are helping. The psychiatrist who saw him for a whole 7 minutes said the only way to know if he's ADD is to try the meds. So basically I don't even know if he even is ADD. But he seems to have some of the core symptoms. Thanks
You read my mind! I'll be writing about this soon in one of the follow-up posts. The diagnostic form is available at http://www.ncfahp.org/Data/Sites/1/media/images/pdf/CHIP-Vanderbilt-parent.pdf. The one-page follow-up form is simply the first 18 questions from the longer diagnostic form.
In adults, the Adult Self-Report Scale (http://www.uvm.edu/medicine/ahec/documents/Adult_ADHD_Self_Report_Scale.pdf) can be used.
Neither of these scales can be used all by itself to diagnose ADHD, but they can be a part of the process.
Thank you, Dr Mason for the reminder that we need to measure those improvements. Your blogs are spot on and very helpful to those of us out on the ADHD front lines.