The science of epidemiology

By James Hyde | Contributor

Part 1: “You can observe a lot just by watching.”

Two weeks ago we lost Yogi Berra, a great ball player and a keen observer of human behavior. One of Yogi’s heralded head-scratchers was his comment that “you can observe a lot just by watching.” Head-scratcher or not, it is an elegant description of epidemiology, the basic science of public health. Epidemiology is the study of the patterns and distribution of disease and illness, and of their antecedents, in human populations. It is an observational science. By observing, for example, a dramatic decrease in dental cavities among people drinking water with naturally occurring fluoride, we discovered the importance of fluoride in reducing dental disease.

People are often understandably frustrated by public health scientists’ failure to identify causal links between certain factors and illnesses in the face of seemingly overwhelming “circumstantial” evidence. To name just a few, think for a moment about ALS (Lou Gehrig’s disease) and military service in Iraq and Afghanistan, GMOs and adverse health outcomes, and EMF (electromagnetic field) exposure and cancer. All of these have been characterized as cause and effect in the popular media, and none has been scientifically proven.

For us to be able to interpret and think critically about the deluge of health information we receive, we need to understand a bit about the strengths and weaknesses of modern epidemiologic methods. Much of this may sound like rocket science. It is not.

The word “epidemiology” comes from the Greek. It literally means the study (-ology) of events around (epi-) people (demos). Its aim is to identify patterns of disease and illness in people who share certain characteristics versus those who do not, for example by looking for disease patterns in smokers versus non-smokers. The ultimate goal is to understand causes. However, just because a certain factor is present in a group with an illness and absent in those who are healthy does not mean the factor is a “causal” one. For example, increasing age is associated with the risk of dying, but age is not a cause of death. Rather, age is associated with conditions—heart attack, stroke—that often do result in death. So the first thing to remember in reading about any study is that “associations” should not be assumed to be “causal.”
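To make this concrete, here is a small, purely hypothetical simulation. The numbers are invented, and “gray hair” stands in for any non-causal marker of age; the point is that a trait can strongly predict death without causing it when age drives both.

    # Hypothetical simulation: a confounder (age) creates an association
    # between gray hair and death, even though gray hair causes nothing.
    import random

    random.seed(1)
    people = []
    for _ in range(100_000):
        age = random.randint(30, 90)
        gray_hair = random.random() < (age - 30) / 60   # more likely with age
        died = random.random() < (age / 100) ** 3       # risk rises with age
        people.append((gray_hair, died))

    def death_rate(group):
        return sum(d for _, d in group) / len(group)

    gray = [p for p in people if p[0]]
    not_gray = [p for p in people if not p[0]]
    print(f"Death rate, gray hair:    {death_rate(gray):.1%}")
    print(f"Death rate, no gray hair: {death_rate(not_gray):.1%}")

Run this and the gray-haired group dies at several times the rate of the rest, yet dyeing everyone’s hair would save no one. The association is real; the causation is not.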

Good researchers and scientists are very careful about this distinction. Social media, TV and newspapers (The Charlotte News excepted) not so much.

In the search for causes of disease and illness we are often forced for ethical reasons to study “free range” human beings, which can lead to all sorts of false conclusions. To illustrate this, consider a hypothetical study examining the relationship between cell-phone use and head and neck cancer. Because people have wildly different patterns of cell-phone use in terms of minutes used per day, use of “hands-free” devices and type of phone, not everyone will have the same exposure intensity. Subjects may also be exposed to many things in the course of their day that are known risk factors for the development of head and neck cancers: smoking, workplace chemicals, x-rays, etc. Quantifying all of these factors and taking them into account in the design of studies and analysis of data is a daunting task. In addition, studies like this require that we rely on participants for information about their history of cell-phone use. Since most people cannot recall even what they ate for lunch two days ago, obtaining reliable information is extremely difficult.

A key component of observational studies is to include subjects who are as alike as possible in ways other than, for example, their exposure to cell-phone radiation. Where could you possibly find such a group in the U.S. today? Perhaps one could recruit subjects in a socially or culturally isolated community such as the Amish. In that case, however, our comparison group would hardly meet the standard of comparability in terms of other lifestyle behaviors.

A further complication is that many of the diseases and illnesses we may want to study—cancers, heart disease, diabetes, lower back pain—take a long time to develop. As a consequence, observational studies require years of follow-up before a sufficient number of cases of a disease occur and can be studied. The rarer the disease or outcome, the larger the study population must be and the longer the wait for results. (Since time is money, this has a profound impact on cost.) This would certainly be a serious problem in our hypothetical cell-phone study.
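A bit of back-of-the-envelope arithmetic shows why. The figures below are assumed purely for illustration, not quoted from any real cancer registry:

    # Back-of-the-envelope cohort sizing with assumed, illustrative numbers.
    incidence_per_year = 15 / 100_000   # assumed: 15 cases per 100,000 per year
    cases_needed = 100                  # cases required for a meaningful analysis
    years_of_follow_up = 5

    expected_cases_per_person = incidence_per_year * years_of_follow_up
    participants = cases_needed / expected_cases_per_person
    print(f"Participants to recruit and follow: {participants:,.0f}")

Under these assumptions a researcher would need to recruit and track roughly 133,000 people for five years just to accumulate 100 cases, which is why rare outcomes make for long, expensive studies.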

There is also the issue of sample selection. Since we cannot study everyone with a cell phone, we must choose a sample of people for study. Whom we choose and how we choose them are critical. We can’t choose people who are too young or we will have to wait too long to observe cases of disease. We can’t choose people who are too old since many of them will have had a lifetime of exposures—even before cell phones were in use—that could explain the diseases we observe. But mainly we need to choose a sample that is large enough so we will be able to observe reasonably rare events, such as head and neck cancers, as they occur over time. All other things being equal, we want the largest and most representative sample we can afford to recruit and follow.

Finally, it is not unusual for studies to be reported in which researchers fail to find an association between an exposure (cell phones) and an outcome (head and neck cancers). This may result from there being no association to find or from a defect in the design of the study itself. In my experience, the failure to employ a sufficiently large sample size to find the proverbial “needle in the haystack” is almost always the dominant source of error.
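For the statistically curious, here is a rough sketch of the standard calculation researchers use to size such a study, based on the usual normal-approximation formula for comparing two groups. The disease rates and the size of the exposure effect below are hypothetical:

    # Sketch of the standard two-proportion sample-size formula
    # (normal approximation; all rates here are hypothetical).
    from statistics import NormalDist

    def n_per_group(p1, p2, alpha=0.05, power=0.80):
        """Participants needed per group to detect a rate of p1 vs. p2."""
        z = NormalDist()
        z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided 5% significance
        z_power = z.inv_cdf(power)           # 80% chance of detecting the effect
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return (z_alpha + z_power) ** 2 * variance / (p1 - p2) ** 2

    # Suppose 0.075% of unexposed people develop the cancer during the
    # study, and heavy cell-phone use raises that to 0.100%.
    print(f"{n_per_group(0.00075, 0.00100):,.0f} participants per group")

With these made-up numbers the formula calls for over 200,000 people in each group; a study that enrolled only a few thousand would almost certainly “find nothing,” even if the association were real.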

Most of this discussion has focused on what can go wrong in searching for causes. However, in my lifetime many findings from observational studies have fundamentally changed medical and public health practice, for example in the treatment of high blood pressure, the role of diet and exercise, and the treatment of gastric ulcers. The list goes on and on. But what this experience also teaches us is that we must think critically about new data and information and carefully consider possible sources of error. A corollary is that we should never accept the results of a single study. Perhaps the researchers missed something or, alternatively, saw something that wasn’t there.

In the next installment, I will discuss experimental studies that offer a powerful method of avoiding some of the pitfalls just discussed.

James Hyde lives in Charlotte and is emeritus associate professor of public health at the Tufts University School of Medicine. This is the first in a series of three columns helping readers to think critically about health research studies. 