A couple of years ago I wrote an article explaining one of the most interesting phenomena in statistics, Simpson's Paradox. For those who have forgotten (or just didn't read it), Simpson's Paradox occurs when a single data set is analyzed by two different, equally valid methods, and the two methods give opposite conclusions. This usually happens because the data set contains information that is important to the results but is left out of the analysis. Two classic examples: drug tests that find one drug works better than the other, but only when the age of the patient is ignored; and reviews of company hiring that find the company is biased toward hiring certain races or genders even though none of the individual departments show any bias. (In the first case, the analysis didn't account for the fact that younger people are naturally healthier than older people, so if the study gives a weaker medicine to young people and a stronger medicine to older people, the weak medicine looks better. In the second case, each department is fair in its hiring, but more men applied to a large department that hires many people while more women applied to smaller departments that hire very few.)
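To see how the hiring version works in miniature, here is a sketch with made-up numbers (a hypothetical company, not any real data set): every department hires women at a higher rate, yet the company-wide rate favours men, simply because of where people applied.

```python
# Hypothetical hiring numbers illustrating Simpson's paradox:
# each department favours women, the company total favours men.
hired = {
    # (department, gender): (hired, applied)
    ("large", "men"):   (48, 80),
    ("large", "women"): (13, 20),
    ("small", "men"):   (2, 20),
    ("small", "women"): (12, 80),
}

def rate(pairs):
    """Pooled hiring rate over a list of (hired, applied) pairs."""
    h = sum(p[0] for p in pairs)
    a = sum(p[1] for p in pairs)
    return h / a

for dept in ("large", "small"):
    m = rate([hired[(dept, "men")]])
    w = rate([hired[(dept, "women")]])
    print(f"{dept}: men {m:.0%}, women {w:.0%}")   # women ahead in each dept

m_all = rate([v for k, v in hired.items() if k[1] == "men"])
w_all = rate([v for k, v in hired.items() if k[1] == "women"])
print(f"overall: men {m_all:.0%}, women {w_all:.0%}")  # men ahead overall
```

The reversal comes entirely from the application pattern: most men applied to the large department with a high hiring rate, most women to the small one with a low rate.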

While Simpson's Paradox is an interesting statistical phenomenon, it is not the only one. There is a closely related effect called Berkson's Paradox, and I feel especially compelled to write about it today after reading a pseudo-academic article on the side effects of synthetic drugs. I am not going to weigh in on whether western medicine is better than eastern medicine, whether natural herbs are as effective as pharmaceuticals that have passed double-blind controlled studies, or whether you can die of an overdose if you forget to take your homeopathic pills. But in this particular case I cannot stand to see statistics abused to promote a cause.

In this study, the authors took data from a medical clinic that was monitoring the side effects of a certain medication. Different patients take different dosages, and have different levels of side effects as a result of taking the medication. And according to the analysis provided with this study, there is a counter-intuitive effect in which patients who took low doses had a large number of serious side effects, while patients on higher doses experienced very few side effects. This is a bizarre result, and a perfect example of Berkson's paradox.

To demonstrate, consider what the 'real' data would look like. If a researcher does a proper, controlled study in which a wide range of people are randomly assigned a dosage of the new medication, and then carefully examined for any side effects, the data might look like this:

Although this is random data generated for this demonstration, it shows the expected correlation between dosage (on the horizontal axis) and side effects (on the vertical axis). The higher the dose, the greater the side effects, just as expected.
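Such random data is easy to generate. Here is a minimal sketch (with arbitrary made-up units and coefficients, not the article's actual figure) of a population where side effects genuinely rise with dosage:

```python
# Simulate a properly controlled study: dosage assigned at random,
# side-effect severity rising with dose plus individual variation.
import numpy as np

rng = np.random.default_rng(0)

n = 1000
dosage = rng.uniform(0, 100, n)          # assigned dose, arbitrary units
noise = rng.normal(0, 15, n)             # individual variation
side_effects = 0.8 * dosage + noise      # severity score: rises with dose

r = np.corrcoef(dosage, side_effects)[0, 1]
print(f"correlation in the full population: {r:.2f}")  # strongly positive
```

Plotting `dosage` against `side_effects` would give a cloud sloping upward, just like the figure above.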

Unfortunately not all researchers use the best methods of conducting such an experiment - especially if they already know the result they want to get. 

Suppose that a researcher decides to look at data from patients who visit a particular medical clinic. These are people who are already taking the medication for some illness, and who are visiting the doctor with a specific medical complaint. The researchers are still collecting information on the different side effects that occur when people take different dosages, and so believe that the study will be equally valid - but they are wrong.

Consider the people who are not included in this second study. If someone is not very sick, they do not take very much medicine and they also have few if any side effects. And so that person is unlikely to go to the medical clinic just to tell the doctor that everything is going fine. That means the study does not include low dose/low side effect patients, and all the data in the dark blue region shown here will be excluded.

On the other end of the data spectrum, consider someone who is very sick, needs a lot of medication, and suffers a lot of side effects. They are not going to be seeing their local physician; rather, they will be seeing a specialist in their illness, or perhaps will even be admitted to the hospital. Or, if the side effects are too great, their doctor will take them off the medication completely, and again they won't be included in the study. Either way, the people in the high dosage/high side effect region (denoted by green in this diagram) are not going to be included in the study.

With those two groups excluded from the study, only a narrow strip remains in the data set. These are people who need a high dosage, but they handle the medication well and do not need to be hospitalized, as well as people who do not need much medication to treat their illness, and probably would not be visiting the clinic at all except that they are so sensitive to the drug that they must see a physician regularly to handle the side effects.

And that means the researchers in this second study are drawing conclusions from a small subset of the real data, and in this subset there is a negative correlation. People only go to the clinic if they are either quite sick but handle the medication well enough to stay out of hospital, or not very sick at all but so sensitive to the medication that they visit the doctor to deal with the side effects. As a result, among clinic patients, the higher the dose, the more likely it is that the patient is experiencing few or no side effects.
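We can see the whole effect in one short simulation. This is a sketch with hypothetical cutoffs (the thresholds 50 and 40 are invented for illustration, not taken from the study): we generate the same positively correlated population as before, then exclude the two groups described above, and watch the sign of the correlation flip.

```python
# Berkson-style selection: remove the patients who never reach the clinic
# (low dose, few side effects) and those who go past it to hospital or are
# taken off the drug (high dose, severe side effects).
import numpy as np

rng = np.random.default_rng(0)

n = 5000
dosage = rng.uniform(0, 100, n)
side_effects = 0.8 * dosage + rng.normal(0, 15, n)

low_low = (dosage < 50) & (side_effects < 40)    # too healthy to visit
high_high = (dosage > 50) & (side_effects > 40)  # hospital, or off the drug
in_clinic = ~low_low & ~high_high                # the narrow remaining strip

r_all = np.corrcoef(dosage, side_effects)[0, 1]
r_clinic = np.corrcoef(dosage[in_clinic], side_effects[in_clinic])[0, 1]
print(f"full population: {r_all:+.2f}")   # positive, as it should be
print(f"clinic sample:   {r_clinic:+.2f}")  # negative: Berkson's paradox
```

Nothing about the underlying relationship changed; only the sampling did, and that alone reverses the sign of the correlation.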

The conclusion of this study is then that higher dosages of this medication result in fewer side effects. That isn't true at all; it is solely an artifact of the very limited selection of data points that get included.

And while this might seem obvious, countless headline-grabbing stories have fallen victim to this exact effect. Like Simpson's Paradox, Berkson's Paradox can turn fact into fiction and fiction into fact. It makes one realize that we should all be a little more careful when reading about the latest research!