Science is organized common sense where many a beautiful theory was killed by an ugly fact. – Thomas Huxley
It really is a nice theory. The only defect I think it has is probably common to all philosophical theories. It's wrong. – Saul Kripke
I received the following question (abbreviated slightly) regarding blog 18, "Percentage Change from Baseline – Great or Poor?"
What about the case where you have just a single group of subjects measured at two time-points (say, baseline and follow-up)? This often occurs in medical studies, as you know.
Then, is percent change still not useful for each subject?
With regard to this specific design and percentage change, you can compute the mean difference between baseline and post. Alternatively, you could compute the difference between baseline and post divided by baseline (assuming you have a ratio-level scale) and present the median percent change. [Note: for reasons mentioned in my blog, the individual percentage changes tend to have a highly positively skewed distribution, so I would use medians rather than means.]
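The computation above can be sketched in a few lines of Python. The numbers here are made up purely for illustration; the point is that a single large per-subject percentage change pulls the mean well above the median, which is why the median is the safer summary:

```python
import statistics

# Hypothetical baseline and follow-up scores for a single group of subjects
# (invented for illustration; lower score = improvement on this scale).
baseline = [40, 50, 20, 10, 60, 30]
post = [30, 45, 10, 2, 55, 24]

# Per-subject percent change relative to baseline. This requires a
# ratio-level scale, so every baseline value must be strictly positive.
pct_change = [100 * (b - p) / b for b, p in zip(baseline, post)]

print(statistics.median(pct_change))  # 22.5
print(statistics.mean(pct_change))    # ~32.2, inflated by the skewed tail
```

The subject who went from 10 to 2 contributes an 80% change, dragging the mean up by nearly ten points while barely moving the median, which illustrates the positive skew the note warns about.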
But what does it mean?
A single group measured at two time points with an intervention in between is an internally flawed (horrible) design. Many effects could cause a true mean change from baseline to post; unfortunately, the (medical) treatment is typically not the only one. For example: natural changes in people (e.g., spontaneous remission, natural healing, regression to the mean), season, selection of subjects (sick patients come to doctors because they are ill – at any later time point they aren't as ill), subjects saying the nice doctor helped them, etc. (and the list of potential alternative explanations for the difference is very long). Most of these alternative (non-treatment) factors bias the results to make the second observation appear better. The single-group pre-post is a truly horrible design. It is not a true experimental design. Campbell and Stanley classified it as a Pre-Experimental Design (see page 7 of their book).
The very first study I professionally analyzed was a 4-week drug intervention in depression. Yes, the patients treated with our medication (Amoxapine) changed 13 points. Fortunately, the study was randomized and blinded, with patients treated with either drug or placebo. It was only because we included placebo patients, who had a mean change of 7 points, that we could deduce that our drug had a 6-point treatment effect. Without the placebo group, the 13 points could have been solely a placebo effect or any number of similar effects.
Unfortunately, most experimentalists can make NO credible interpretation as to why the one-group pretest-posttest [percentage] difference is what it is. One could say, 'we saw a 13-point difference … or a 31% median percentage improvement relative to baseline'. But there is a major leap from saying 'there was a change' to saying 'we saw a change due to the treatment'. They typically put two disjoint facts together in one sentence and make such an implication, for example, 'the 31 patients, WHO RECEIVED TREATMENT X, had a 13-point difference … .'
Unfortunately, as the commenter noted, this is a frequently used design, especially in the medical device industry. For such a design to work, the scientists MUST believe that patients are static and unchanging – a patently and demonstrably false assumption. But then again, such companies would seldom hire a 'real' statistician to review their study. They typically use students who have had only a single statistics course to analyze their data. They don't want to be told that their head of clinical operations is incompetent or that they are too cheap to run a real study.
Again, the One-Group Pretest-Posttest study is NOT a real experiment; it is little more than a set of testimonials (a One-Group (informal Pretest-) Posttest 'study' with much missing data). You could compute the change and percentage change, but they cannot be interpreted, hence any conclusion drawn from the data analysis is meaningless. The ONLY good that can come of such a trial is the promise of doing a real trial.