I was very excited about the release of the POINTER study on July 30, 2025. This two-year, multi-site trial involving 2,111 older adults concluded that a structured program combining healthy lifestyle factors can significantly enhance cognitive function, and that it does a better job than simply giving people information and suggestions about dietary health, physical activity, problem solving, and social connection. Its results seemed to be a solid first step toward preventing dementia.
Large-scale data analyses, such as the Lancet Commission[i], the US Health and Retirement Study[ii], and our own compilation[iii], had already found that up to half of all dementias, including Alzheimer’s disease, could theoretically be prevented by correcting or improving 14 to 20 health and lifestyle factors. But all of these analyses have been correlational or retrospective. They look at who gets dementia and who doesn’t and examine what went right or wrong. This study was going to be different. It would experimentally alter some of the frequently cited risk factors and test whether doing so improves thinking ability.
But as with any scientific research, you have to look beyond the headlines. When I took a closer look, my initial excitement gave way to a more complex and cautious interpretation. Let me dig into some of the nuances that tempered my exuberance.
Who Was Actually Studied?
The first thing to consider is “who are we studying?” The researchers began with a group of older adults who were at risk for cognitive decline but weren’t yet experiencing significant impairment. This is a great place to start, but it limits how broadly we can generalize from the data. You see, a large number of people who wanted to participate were excluded because they already had healthy habits—they ate well, exercised regularly, and were not considered “at risk” enough. And while we know that dementia prevention often starts in our 40s or 50s, this group had to be 60 or older.
This means the study’s participants were not representative of the average older American adult. Instead, they reflected a subgroup of people who were out of shape, had poor diets, and were not mentally or socially stimulated. What does it say about my patient, a physically active 65-year-old who already eats a reasonably good diet? The study doesn’t tell us.
Statistical Significance vs. Practical Impact
The POINTER study compared two groups: one receiving a highly structured, intensive intervention (supervised exercise for a half hour a day, focus on the MIND diet, computerized cognitive training three times a week, twice monthly support groups, and regular medical monitoring) and a second group that received a “self-guided” program of educational materials and encouragement.
The structured group did show a greater improvement in their composite cognitive scores (memory, executive function, processing speed) over two years, and the difference was statistically significant, meaning it was unlikely to have occurred by chance. But was it a meaningful finding? That’s where things get interesting.
The difference in improvement between the two groups was very small. If we were to translate this to something more familiar, like an IQ score, the structured group’s average IQ would go from 100 to 107.2 over two years. Impressive. Except that the self-guided group’s average would also improve, from 100 to 106.4. That’s a difference of less than one IQ point. While both groups ended up with higher scores, the practical difference between them is questionable. Does it make a difference in daily life? Does it meaningfully change their risk of developing dementia?
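To see where that “less than one IQ point” figure comes from, here is the back-of-the-envelope arithmetic. This is only an illustration, not a calculation taken from the paper itself: it assumes the cognitive composite is measured in standard-deviation (SD) units and rescaled to an IQ-style metric with a mean of 100 and an SD of 15.

$$
\begin{aligned}
\text{Structured group gain:} &\quad 107.2 - 100 = 7.2\ \text{IQ points} \;\approx\; 7.2/15 = 0.48\ \text{SD}\\
\text{Self-guided group gain:} &\quad 106.4 - 100 = 6.4\ \text{IQ points} \;\approx\; 6.4/15 \approx 0.43\ \text{SD}\\
\text{Between-group difference:} &\quad 7.2 - 6.4 = 0.8\ \text{IQ points} \;\approx\; 0.8/15 \approx 0.05\ \text{SD}
\end{aligned}
$$

A between-group effect of roughly 0.05 SD sits well below the conventional benchmark for even a “small” effect size (about 0.2 SD), which is why statistical significance alone does not settle the question of practical importance.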
This raises a critical question: Was the expensive and resource-intensive structured intervention truly worth the effort compared to the much simpler, more cost-effective self-guided program? As Dr. Jonathan Schott pointed out in an accompanying editorial in the same issue of JAMA[iv], the educational approach of the self-guided group might be a far more efficient public health strategy. Essentially, a lot more bang for our buck.
Why a Control Group Matters
Perhaps the most significant limitation of the study is the absence of a true no-treatment control group. The study compared two different levels of intervention, but it didn’t compare either of them to a group that received no intervention at all. This leaves a major question unanswered: how much of the measured improvement was due to the interventions, and how much was simply due to the effect of repeated testing?
Participants were tested every six months. Repeatedly taking the same tests can lead to better scores, a phenomenon known as the practice effect[v]. The researchers attempted to account for this statistically, but without a control group to show us how much improvement is due to practice, it’s impossible to know the full impact. For a study of this scale and importance, the absence of a control group is a glaring and surprising omission that limits the conclusions we can draw.
Additionally, the “one-size-fits-all” approach of the study is a concern. We don’t know how much each component (diet, exercise, cognitive stimulation, social activity) contributed to the overall outcome. The study also did not address several other major dementia risk factors, particularly hearing loss, diabetes, sleep apnea, alcohol consumption, and use of anticholinergic medications. We sacrifice the benefit of an individually tailored approach for this off-the-rack model.
While no study can do it all, the weaknesses in this study’s structure and methodology leave important questions unanswered. It remains unclear which way the POINTER is pointing.
