Making a decision involves several information processing steps within the interval between the presentation of a stimulus and the response. The total time required to complete all of these processing steps is the reaction time (RT). The specific processes differ between experimental paradigms, but a minimal, broadly agreed-upon set involves encoding the choice-relevant features of the stimuli, followed by weighting the evidence for each choice, and initiating a response (Donders, 1868; Ratcliff and McKoon, 2008; Zylberberg et al., 2011; Luce, 1986). Despite being a nearly two-century-old problem (Helmholtz, 1850), it remains unclear how the RT emerges from these putative components.
Progress on this problem has long been hampered by the relatively limited information provided by RT and response-accuracy data alone. Co-registering physiological signals can clarify and extend conclusions about the information processing steps within the RT (Turner et al., 2017). Evidence for the putative components that make up the RT has been found by recording the electroencephalogram (EEG) during decision tasks. First, a negative deflection over occipital electrodes occurring around 200 ms after the presentation of a choice, the N200, has been associated with visual encoding of the choice elements by participants (Nunez et al., 2019; Ritter et al., 1979). Second, EEG data have shown that the weighting of evidence for the alternatives is associated with a positive voltage developing over centro-parietal electrodes after the early visual potentials (Kutas et al., 1977). Computational models of decision-making explain the experimental effects observed on these centro-parietal components as an evidence accumulation mechanism (O’Connell et al., 2012; Kelly et al., 2021). Lastly, a component preceding the response has been shown to lateralize with the side of the executed response (Coles et al., 1985). This lateralized readiness potential (LRP) has later been described as arising from an accumulation-to-bound mechanism governing the decision to produce a movement (Schurger et al., 2012).
However, the knowledge gained about the nature and latencies of cognitive processes within the stimulus–response interval from such electrophysiological components is limited by the low signal-to-noise ratio (SNR) of classical neural measurements. To improve the SNR, researchers usually rely on averaging these signals over many trials. Unfortunately, averaging time-varying signals yields an average waveform that misrepresents the underlying single-trial events (Luck, 2005; Borst and Anderson, 2024). In the case of decision-making, several studies have shown wide trial-by-trial variation in the timing of cognitively relevant neural events (Vidaurre et al., 2019; Smyrnis et al., 2012; Weindel et al., 2021; Weindel, 2021). Averaged components are additionally distorted by the fact that multiple cognitive processes and associated EEG components are typically present within trials and overlap in time between trials (Woldorff, 1993), forcing researchers to study physiological components in isolation. A few studies have been able to simultaneously investigate multiple EEG components in decision-making using single-trial approaches. As an example, Philiastides et al., 2006 used a classifier on the EEG activity of several conditions to show that the strength of an early EEG component was proportional to the strength of the stimulus, while a later component was related to decision difficulty and behavioral performance (see also Salvador et al., 2022; Philiastides and Sajda, 2006). The authors further interpreted a third EEG component as indexing the resources allocated to the upcoming decision given the perceived decision difficulty. Their study showed that single-trial information can be used to separate cognitive processes within decision-making. Nevertheless, their method requires a separate classifier for each component of interest, limiting the analysis to existing theories of distinct components.
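The distortion introduced by trial averaging can be illustrated with a toy simulation (hypothetical parameters, NumPy only, not the analysis used in the study): when a component's single-trial peak latency jitters across trials, the average waveform is lower in amplitude and broader in time than any single-trial event.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 1.0, 0.002)  # 1 s epoch at 2 ms resolution

def component(peak, width=0.05):
    """A Gaussian-shaped single-trial EEG component peaking at `peak` seconds."""
    return np.exp(-0.5 * ((t - peak) / width) ** 2)

# Single-trial peak latencies jittered across trials (illustrative 60 ms SD)
peaks = rng.normal(loc=0.4, scale=0.06, size=200)
average = np.mean([component(p) for p in peaks], axis=0)

# The trial average is flatter and wider than the single-trial waveform,
# even though every single trial contains an identical unit-amplitude event.
print("average peak amplitude:", round(average.max(), 2))
```

Here every trial contains the same unit-amplitude event, yet the average misstates both its amplitude and its duration, which is the core motivation for single-trial methods.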
One potential solution, mixing behavior with multivariate analysis of single-trial neural signals to achieve single-trial resolution, has emerged through the development of hidden multivariate methods (Weindel et al., 2024; Anderson et al., 2016). These methods model the neural data of each trial as a sequence of short-lived multivariate cortex-wide events, repeated on each trial, whose timing varies on a trial-by-trial basis and jointly defines the RT. In the case of EEG, it is assumed that any cognitive step involved in the RT is represented by a specific topography recurring across trials. The time jitter of each topography is accounted for by estimating a trial-wise distribution for each event, in which the expected time of the topography’s peak is given by the time distribution of the previous event’s peak and the expected time distribution of the current event. By using the recorded behavior to constrain the search for trial-shared sequential activations in the EEG to estimated time ranges, the hidden multivariate pattern (HMP) model (Weindel et al., 2024) provides an estimate of the number of events and their latency on each trial. Previous similar approaches have shown that different information processing steps can be extracted from the EEG in a wide range of tasks (Berberyan et al., 2021; Zhang et al., 2018; Anderson et al., 2016; Anderson et al., 2018; Krause et al., 2024). Building on previous work (van Maanen et al., 2021), we expect the EEG data of a decision-making task to decompose into task-relevant intervals indexing the information processing steps within the RT. In the current study, we combine this single-trial modeling strategy with strong theoretical expectations regarding the impact of experimental manipulations on the latent information processing steps during decision-making.
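The generative assumption behind this decomposition can be sketched in a few lines. The sketch below simulates RTs as sums of sequential inter-event intervals; the stage names and gamma parameters are purely illustrative and this is not the HMP estimation procedure itself, which fits multivariate EEG patterns rather than simulating latencies.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 1000

# Hypothetical stages and gamma (shape, scale) parameters, for illustration
# only: each RT is assumed to be the sum of sequential inter-event intervals,
# each drawn from its own trial-wise distribution.
stages = {"encoding": (2, 0.05), "decision": (2, 0.15), "response": (2, 0.04)}

onsets = np.zeros(n_trials)
event_times = {}
for name, (shape, scale) in stages.items():
    onsets = onsets + rng.gamma(shape, scale, size=n_trials)
    event_times[name] = onsets.copy()  # single-trial latency of each event

rt = event_times["response"]  # the last event's latency defines the RT
print(f"simulated mean RT: {rt.mean():.2f} s")
```

Under this view, each event's single-trial latency is the cumulative sum of the preceding intervals, which is why constraining the search with the observed RTs (as HMP does) bounds where each topography can occur within a trial.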
The participants’ task was to report which of two Gabor patches flanking a fixation cross displayed the higher contrast (Figure 1, top panel). On a trial-by-trial basis, we manipulated the average contrast of the two patches while keeping the difference between them constant (see the two example trials in Figure 1, one with an average contrast of 5% and one with an average contrast of 95%, both with a difference of 5%). We hypothesized that this contrast manipulation would generate two opposing predictions for encoding and decision processes (Weindel et al., 2022), associated with two of the oldest laws in psychophysics: Piéron’s law (Piéron, 1913) and Fechner’s law (Fechner, 1860).
Contrast manipulation used in the experiment.
The top panel shows two example stimuli illustrating the minimum (left) and maximum (right) contrast values. The bottom panel shows the predictions of Piéron’s, Fechner’s, and the linear law for all contrast levels used in the study, for a fixed set of parameters. The y-axis refers to the time predicted by each law given a contrast value (x-axis) and the chosen set of parameters. The intercept, slope, and exponent parameters are participant-specific estimates for each of the three laws. The Fechner diffusion model additionally includes nondecision and decision threshold parameters (see ‘Materials and methods’).
Piéron’s law predicts that the time to perceive the two stimuli (and thus the choice situation) should follow a negative power law of stimulus intensity (Figure 1, green curve). By contrast, Fechner’s law states that the perceived difference between the two patches follows the logarithm of the absolute contrast of the two patches (Figure 1, yellow curve). As the task of our participants is to judge the contrast difference, Piéron’s law should predict the time at which the comparison starts (i.e., when the stimuli become perceptible), while Fechner’s law should determine the comparison, and thus decision, difficulty. Given that Fechner’s law is expected to capture decision difficulty, we connected it to evidence accumulation models by replacing the rate of accumulation with Fechner’s law in the proportional-rate diffusion model of Palmer et al., 2005. This link with an evidence accumulation model further allows connecting the RT to the proportion of correct responses. To test the generalizability of our findings and to allow comparison with standard decision-making tasks, we also included a speed–accuracy manipulation by asking participants to focus on either the speed or the accuracy of their responses in different experimental blocks.
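The opposing predictions of the two laws can be computed directly. In the sketch below, all parameter values (`t0`, `slope`, `exponent`, the drift scaling `k`, and the threshold `a`) are assumed for illustration, not the participant-specific estimates from the study; the decision-time expression is the mean decision time of the proportional-rate diffusion model of Palmer et al., 2005, with the drift rate replaced by the Fechner (log-contrast) difference.

```python
import numpy as np

contrast = np.linspace(0.05, 0.95, 10)  # average contrast of the two patches
delta = 0.05                            # fixed contrast difference

# Illustrative parameter values; the study estimates these per participant.
t0, slope, exponent = 0.15, 0.05, 0.8

# Piéron's law: perception time as a negative power law of intensity.
pieron = t0 + slope * contrast ** -exponent

# Fechner's law: perceived difference is the difference of log contrasts,
# so discriminability shrinks as average contrast grows.
fechner_diff = np.log(contrast + delta / 2) - np.log(contrast - delta / 2)

# Proportional-rate diffusion (Palmer et al., 2005): mean decision time
# (a / v) * tanh(a * v), here with drift v set by the Fechner difference.
k, a = 10.0, 1.0
v = k * fechner_diff
fechner = t0 + (a / v) * np.tanh(a * v)

# Linear control law for comparison.
linear = t0 + slope * contrast
```

With these values, Piéron's law predicts shorter perception times as average contrast rises, whereas the Fechner-based drift predicts longer decision times, because the log-contrast difference, and hence the drift rate, shrinks at high contrasts.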
