Contemporary digital cognitive assessment tools differentiate response accuracy from response latency, revealing cognitive struggles that may not be evident in standard testing. The approach leverages time-based measures to separate the correctness of a response from the efficiency of the cognitive process that produces it.
Latency and reaction-time measures have long provided finer-grained insight into neural processing speed and response variability. Elevated intraindividual variability in reaction times has proven highly sensitive to subtle neurodegenerative change and frequently outperforms conventional accuracy-only measures in identifying early Alzheimer disease and mild cognitive impairment.1
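As a rough, hypothetical illustration of what such a metric looks like in practice (not code from the cited study), intraindividual variability is commonly summarized as the standard deviation, or coefficient of variation, of a person's trial-to-trial reaction times; the values below are invented:

from statistics import mean, stdev

def intraindividual_variability(reaction_times_ms):
    """Summarize trial-to-trial variability in one person's reaction times.

    Returns the intraindividual standard deviation (ISD) and the
    coefficient of variation (ISD / mean), two common IIV summaries.
    """
    m = mean(reaction_times_ms)
    isd = stdev(reaction_times_ms)  # sample SD across trials
    return {"mean_rt_ms": m, "isd_ms": isd, "cov": isd / m}

# Two hypothetical participants with the same average speed but different variability
steady = [620, 640, 610, 630, 625, 615]
variable = [450, 900, 500, 850, 480, 560]

print(intraindividual_variability(steady))    # low coefficient of variation
print(intraindividual_variability(variable))  # high coefficient of variation, same mean

Two people with the same average speed can differ sharply on this measure, which is the kind of signal these studies report as sensitive to early change.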
According to neurocognitive researchers David Libon, PhD, and Rodney Swenson, PhD, latency metrics can serve as neurocognitive biomarkers for early detection of dementia, for monitoring disease trajectories, and for enhancing diagnostic precision. The researchers recently spoke with Patient Care about their paper on the subject; in the short video above, Libon underscores how latency-based assessment augments traditional screening.
Rodney A Swenson, PhD, is Clinical Professor of Psychiatry and Behavioral Sciences at the University of North Dakota School of Medicine and Health Sciences in Grand Forks, ND.
David J Libon, PhD, is a Professor at the New Jersey Institute for Successful Aging at Rowan University in Glassboro, NJ.
Reference
1. Christ BU, Combrinck M, Thomas KGF. Both reaction time and accuracy measures of intraindividual variability predict cognitive performance in Alzheimer's disease. Front Hum Neurosci. 2018;12. doi:10.3389/fnhum.2018.00124
Patient Care: Your recent paper focuses on the importance of latency and time-based measures during digital assessment. Why are timing parameters critical in detecting the subtle neurocognitive changes associated with very early dementia?
David J Libon, PhD: One thing we can do very easily now with digital technology is dissociate the correctness of a response from how long it takes to generate that response. For example, I might say, “I’m going to say three words, and after I say them, repeat them back to me.” If someone responds quickly, “Toothbrush, cigarette, pen,” that’s a perfectly fine, 100% correct answer. But someone else might say, “Um, let’s see… what was the first word? Uh, tooth… toothpaste… cigar, wait, no, no, no… toothbrush, cigarette, pen.” That’s also scored as 100% correct.
But as you can hear in that second example, the amount of time it takes, and the amount of non-productive speech, are very different. These factors may indicate that the patient is struggling. With time-based, or latency, measures, we’re able to separate the accuracy of a response from the cognitive process required to generate it.
So even if someone is giving fully correct answers, if their timing parameters fall outside defined thresholds, current research suggests they may fall into a different category: one that indicates they are potentially at risk, or that there may be other evidence consistent with a diagnosable mild cognitive impairment (MCI), for example.
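To make that dissociation concrete, here is a minimal, hypothetical sketch; the 6-second cutoff, field names, and scoring rules are illustrative assumptions, not thresholds or software described by Libon and Swenson. Both simulated responses score 100% correct, but only one is flagged on latency.

from dataclasses import dataclass

# Illustrative only: the cutoff and field names are hypothetical,
# not values taken from Libon and Swenson's work.
LATENCY_FLAG_SECONDS = 6.0

@dataclass
class Response:
    expected: list[str]   # target words, e.g. ["toothbrush", "cigarette", "pen"]
    recalled: list[str]   # words the patient eventually produced
    latency_s: float      # time from prompt to completed response

def score(response: Response) -> dict:
    correct = {w for w in response.recalled if w in response.expected}
    accuracy = len(correct) / len(response.expected)
    return {
        "accuracy": accuracy,
        "latency_s": response.latency_s,
        "latency_flag": response.latency_s > LATENCY_FLAG_SECONDS,
    }

fast = Response(["toothbrush", "cigarette", "pen"],
                ["toothbrush", "cigarette", "pen"], latency_s=2.1)
slow = Response(["toothbrush", "cigarette", "pen"],
                ["toothbrush", "cigarette", "pen"], latency_s=14.8)

print(score(fast))  # accuracy 1.0, latency_flag False
print(score(slow))  # accuracy 1.0, latency_flag True

Reporting accuracy and a latency flag as separate fields mirrors the point above: an accuracy-only score would treat the two responses as identical, while the timing data distinguishes them.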