Most analyses of reaction time (RT) data are conducted using the statistical techniques with which psychologists are most familiar, such as analysis of variance on the sample mean. Unfortunately, these methods are usually inappropriate for RT data, because they have little power to detect genuine differences in RT between conditions. In addition, some statistical approaches can, under certain circumstances, produce findings that are artifacts of the analysis method itself. A body of research has demonstrated more effective analytical methods, such as analyzing the whole RT distribution, although this research has had limited influence. The present article summarizes these advances in methods for analyzing RT data.

Reaction time (RT; also called response time or latency), the time taken to complete a task, has been a common dependent measure in psychology for many years. Most researchers analyze RT data by conducting an analysis of variance (ANOVA) on the sample mean (Van Zandt, 2002); this type of statistical approach may not be effective, however, owing to the particular characteristics of RT data. Statistically, RTs are treated as random variables: that is, observed RTs, even from the same subject in the same condition, vary somewhat across trials. Reaction times collected in a particular experimental condition are assumed to represent a sample of the population of RTs from that condition. They are assumed to be independently and identically distributed (iid), although this is rarely the case in practice because of factors, such as fatigue and sequential effects, that are generally assumed to be of negligible impact and are therefore ignored (cf. Thornton & Gilden, 2005). Importantly, response-time distributions are not Gaussian (normal) distributions but rather rise rapidly on the left and have a long positive tail on the right (see Figure 1). Reaction-time distributions are well approximated by the ex-Gaussian distribution (Luce, 1986), which is the convolution of a Gaussian and an exponential distribution (i.e., the distribution of the sum of a Gaussian and an exponential random variable) and has been shown to fit empirical RT distributions well (e.g., Balota & Spieler, 1999). This distribution has three parameters. The mean and the standard deviation of the Gaussian component (the left hump) are described by mu (μ) and sigma (σ), respectively. Tau (τ) describes both the mean and the standard deviation of the exponential component (the right tail).
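The ex-Gaussian structure described above can be illustrated by simulation: summing Gaussian and exponential draws produces the characteristic rapid rise and long right tail, with mean μ + τ and variance σ² + τ². The sketch below uses purely illustrative parameter values, not values taken from any dataset.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative ex-Gaussian parameters, in milliseconds (hypothetical values)
mu, sigma, tau = 500.0, 50.0, 100.0

# An ex-Gaussian sample is the sum of a Gaussian and an exponential
# random variable (the convolution of the two distributions)
n = 100_000
rts = rng.normal(mu, sigma, n) + rng.exponential(tau, n)

# The component parameters determine the moments of the result:
# mean = mu + tau, variance = sigma**2 + tau**2
print(rts.mean())        # close to mu + tau (600 ms here)
print(rts.std(ddof=1))   # close to sqrt(sigma**2 + tau**2), about 112 ms

# Positive skew: the long right tail pulls the mean above the median
print(rts.mean() > np.median(rts))
```

Note that because τ contributes to both the mean and the variance, a shift in the tail alone changes the sample mean, which is one reason whole-distribution analyses can be more informative than comparisons of means.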

Typically, some observed RTs are not a result of the process of interest. For example, Luce (1986) demonstrated that genuine RTs have a minimum value of at least 100 ms, the time needed for physiological processes such as stimulus perception and for motor responses. Reaction times below this value could be the result of fast guesses, for example. These very fast RTs are easy to identify, and they are normally eliminated by applying a cutoff of between 100 ms and 200 ms. Response times in the middle of the distribution that are due to spurious processes, by contrast, are impossible to identify, because they are intermixed with genuine RTs. Beyond tight experimental control during the task itself, nothing can be done to attenuate the effects of these responses. It is quite common...