Behaviorism is the conceptual framework underlying the science of behavior. The science itself is often referred to as the experimental analysis of behavior or behavior analysis. Modern behaviorism emphasizes the analysis of conditions that maintain and change behavior as well as the factors that influence the acquisition or learning of behavior. Behaviorists also offer concepts and analyses that go well beyond the common-sense understanding of reward and punishment. Contemporary behaviorism provides an integrated framework for the study of human behavior, society, and culture.
Within the social sciences, behaviorism has referred to the social-learning perspective that emphasizes the importance of reinforcement principles in regulating social behavior (McLaughlin 1971). In addition, sociologists such as George Homans and Richard Emerson have incorporated the principles of behavior into their theories of elementary social interaction or exchange (Emerson 1972; Homans 1961). The basic idea in social exchange approaches is that humans exchange valued activities (e.g., giving respect and getting help) and that these transactions are "held together" by the principle of reinforcement. That is, exchange transactions that involve reciprocal reinforcement by the partners increase in frequency or probability; those transactions that are not mutually reinforcing or are costly to the partners decrease in frequency over time. There is a growing body of research literature supporting social exchange theory as a way of understanding a variety of social relationships.
SOME BASIC ISSUES
The roots of behaviorism lie in its philosophical debate with introspectionism—the belief that the mind can be revealed from a person's reports of thoughts, feelings, and perceptions. Behaviorists opposed the use of introspective reports as the basic data of psychology. These researchers argued for a natural-science approach and showed how introspective reports of consciousness were inadequate. Reports of internal states and experiences were said to lack objectivity, were available to only one observer, and were prone to error. Some behaviorists used these and other arguments to reject cognitive explanations of behavior (Skinner 1974; Pierce and Epling 1984; but see Bandura 1986 for an alternative view).
The natural-science approach of behaviorism emphasizes the search for general laws and principles of behavior. For example, the quantitative law of effect is a mathematical statement of how the rate of response increases with the rate of reinforcement (Herrnstein 1970). Under controlled conditions, this equation allows the scientist to predict precisely and to regulate the behavior of organisms. Behavior analysts suggest that this law and other behavior principles will eventually account for complex human behavior (McDowell 1988).
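Herrnstein's equation takes the form of a hyperbola, B = kR/(R + Re), where B is the rate of response, R is the rate of reinforcement, k is the asymptotic response rate, and Re is the rate of reinforcement from all other (extraneous) sources. A minimal Python sketch of the relationship follows; the parameter values for k and Re are illustrative, not fitted data.

```python
def response_rate(r, k=75.0, r_e=20.0):
    """Herrnstein's hyperbola: B = k*R / (R + Re).

    r   -- rate of reinforcement for the operant (per hour)
    k   -- asymptotic response rate (illustrative value)
    r_e -- rate of extraneous reinforcement (illustrative value)
    """
    return k * r / (r + r_e)

# Response rate rises quickly at low reinforcement rates,
# then levels off toward the asymptote k.
for r in (5, 20, 80, 320):
    print(r, round(response_rate(r), 1))
```

The hyperbolic form captures the law's central claim: each added reinforcement has a smaller effect on responding as the total rate of reinforcement grows.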
Contemporary behaviorists usually restrict themselves to the study of observable responses and events. Observable events are those that are directly sensed or are made available to our senses by instruments. The general strategy is to manipulate aspects of the environment and measure well-defined responses. If behavior reliably changes with a manipulated condition, the researcher has established an environment-behavior relationship. Analysis of such relationships has often resulted in behavioral laws and principles. For example, the principle of discrimination states that an organism will respond differently to two situations if its behavior is reinforced in one setting but not in the other. You may talk about politics to one person but not to another, because the first person has been interested in such conversation in the past while the second has not. The principle of discrimination and other behavior principles account for many aspects of human behavior.
Although behaviorism usually has been treated as a uniform and consistent philosophy and science, a conceptual reconstruction indicates that there are many branches to the behavioral tree (Zuriff 1985). Most behavior analysts share a set of core assumptions; however, there are internal disputes over less central issues. To illustrate, some behaviorists argue against hypothetical constructs (e.g., memory) while others accept such concepts as an important part of theory construction.
Throughout the intellectual history of behaviorism, a variety of assumptions and concepts have been presented to the scientific community. Some of these ideas have flourished when they were found to further the scientific analysis of behavior. Other formulations were interesting variations of behavioristic ideas, but they became extinct when they served no useful function. For instance, one productive assumption is that a person's knowledge of emotional states is due to a special history of verbal conditioning (Bem 1965, 1972; Skinner 1957). Self-perception and attributional approaches to social psychology have built on this assumption, although researchers in this field seldom acknowledge the impact. In contrast, the assumption that thinking is merely subvocal speech was popular at one time but has now been replaced by an analysis of problem solving (Skinner 1953, 1969). In this view, thinking is behavior that precedes and guides the final performance of finding a solution. Generally, it is important to recognize that behaviorism continues to evolve as a philosophy of science, a view of human nature, and an ideology that recommends goals for behavioral science and its applications.
THE STUDY OF BEHAVIOR
Behaviorism requires that a scientist study the behavior of organisms for its own sake. Behaviorists do not study behavior in order to make inferences about mental states or physiological processes. Although most behaviorists emphasize the importance of biology and physiological processes, they focus on the interplay of behavior and environment.
In order to maintain this focus, behaviorists examine the evolutionary history and physiological status of an organism as part of the context for specific environment-behavior interactions. For example, a biological condition that results in blindness may have profound behavioral effects. For a newly sightless individual, visual events, such as watching television or going to a movie, no longer support specific interactions, while other sensory events become salient (e.g., reading by braille). The biological condition limits certain kinds of behavioral interactions and, at the same time, augments the regulation of behavior by other aspects of the environment. Contemporary behaviorism therefore emphasizes what organisms are doing, the environmental conditions that regulate their actions, and how biology and evolution constrain or enhance environment-behavior interactions.
Modern behaviorists are interested in voluntary action, and they have developed a way of talking about purpose, volition, and intention within a natural-science approach. They note that the language of intention was pervasive in biology before Darwin's functional analysis of evolution. Although it appears that giraffes grow long necks in order to obtain food at the tops of trees, Darwin made it clear that the process of evolution involved no plan, design, or purpose. Natural variation ensures that giraffes vary in neck size. As vegetation declines at lower heights, animals with longer necks obtain food, survive to adulthood, and reproduce; those with shorter necks starve to death. In this environment (niche), the frequency of long-necked giraffes increases over generations. Such an increase is called natural selection. Contemporary behaviorists insist that selection, as a causal mode, also accounts for the form and frequency of behavior during the lifetime of an individual. A person's current behavior is therefore composed of performances that have been selected in the past (Skinner 1987).
An important class of behavior is selected by its consequences. The term operant refers to behavior that operates upon the environment to produce effects, outcomes, or consequences. Operant behavior is said to be emitted because it does not depend on an eliciting stimulus. Examples of operant behavior include manipulation of objects, talking with others, problem solving, drawing, reading, writing, and many other performances. Consequences select this behavior in the sense that specific operants occur at high frequency in a given setting. To illustrate, driving to the store is operant behavior that is likely to occur when there is little food in the house. In this situation, the operant has a high probability if such behavior has previously resulted in obtaining food (i.e., the store is open). Similarly, a person's conversation is also selected by its social consequences. At the pub, a student shows high probability of talking to his friends about sports. Presumably, this behavior occurs at high frequency because his friends have previously "shown an interest" in such conversation. The behavior of an individual is therefore adapted to a particular setting by its history of consequences.
A specific operant, such as opening a door, includes many performance variations. The door may be opened by turning the handle, pushing with a foot, or even by asking someone to open it. These variations in performance have a common effect upon the environment in the sense that each one results in the door being opened. Because each variation produces similar consequences, behaviorists talk about an operant as a response class. Operants such as opening a door, talking to others, answering questions, and many other actions are each a response class that includes a multitude of forms, both verbal and nonverbal.
In the laboratory, the study of operant behavior requires a basic measure that is sensitive to changes in the environment. Most behaviorists use an operant's rate of occurrence as the basic data for analysis. Operant rate is measured as the frequency of an operant (class) over a specified period of time. Although operant rate is not directly observable, a cumulative recorder is an instrument that shows the rate of occurrence as changes in the slope (or rise) of a line on moving paper. When an operant is selected by its consequences, the operant rate increases and the slope becomes steeper. Operants that are not appropriate to the requirements of the environment decrease in rate of occurrence (i.e., decline in slope). Changes in operant rate therefore reflect the basic causal process of selection by consequences (Skinner 1969).
Behavior analysts continue to use the cumulative recorder to provide an immediate report on a subject's behavior in an experimental situation. However, most researchers are interested in complex settings where there are many alternatives and multiple operants. Today, microcomputers collect and record a variety of behavioral measures that are later examined by complex numerical analysis. Researchers also use computers to arrange environmental events for individual behavior and provide these events in complex patterns and sequences.
CONTINGENCIES OF REINFORCEMENT
Behaviorists often focus on the analysis of environment-behavior relationships. The relationship between operant behavior and its consequences defines a contingency of reinforcement. In its simplest form, a two-term contingency of reinforcement may be shown as R → Sr. The symbol R represents the operant class, and Sr stands for the reinforcing stimulus or event. The arrow indicates that "if R occurs, then Sr will follow." In the laboratory, the behavior analyst arranges the environment so that a contingency exists between an operant (e.g., pecking a key) and the occurrence of some event (e.g., presentation of food). If the presentation of the event increases operant behavior, the event is defined as a positive reinforcer. The procedure of repeatedly presenting a positive reinforcer contingent on behavior is called positive reinforcement (see Pierce and Epling 1999).
A contingency of reinforcement defines the probability that a reinforcing event will follow operant behavior. When a person turns the ignition key of the car (operant), this behavior usually has resulted in the car starting (reinforcement). Turning the key does not guarantee, however, that the car will start; perhaps it is out of gas, the battery is run down, and so on. Thus, the probability of reinforcement is high for this behavior, but reinforcement is not certain. The behavior analyst is interested in how the probability of reinforcement is related to the rate and form of operant behavior. For example, does the person continue to turn the ignition key even though the car doesn't start? Qualities of behavior such as persistence, depression, and elation reflect the probability of reinforcement.
Reinforcement may depend on the number of responses or the passage of time. A schedule of reinforcement is a procedure that states how consequences are arranged for behavior. When reinforcement is delivered after each response, a continuous schedule of reinforcement is in effect. A child who receives payment each time she mows the lawn is on a continuous schedule of reinforcement. Continuous reinforcement produces a very high and steady rate of response, but as any parent knows, the behavior quickly stops if reinforcement no longer occurs.
Continuous reinforcement is a particular form of ratio schedule. Fixed-ratio schedules state the number of responses per reinforcement. These schedules are called fixed ratio since a fixed number of responses are required for reinforcement. In a factory, piece rates of payment are examples of fixed-ratio schedules. Thus, a worker may receive $1 for sewing twenty pieces of elastic wristband. When the ratio of responses to reinforcement is high (value per unit output is low), fixed-ratio schedules produce long pauses following reinforcement: Overall productivity may be low, leading plant managers to complain about "slacking off" by the workers. The problem, however, is the schedule of reinforcement that fixes a high number of responses to payment.
Reinforcement may be arranged on a variable, rather than fixed, basis. The schedule of payoff for a slot machine is a variable-ratio schedule of reinforcement. The operant involves putting in a dollar and pulling the handle, and reinforcement is the jackpot. The jackpot occurs after a variable number of responses. Variable-ratio schedules produce a high rate of response that takes a long time to stop when reinforcement is withdrawn. The gambler may continue to put money in the machine even though the jackpot rarely, if ever, occurs. Behavior on a variable-ratio schedule is said to show negative utility since people often invest more than they get back.
Behavior may also be reinforced only after an interval of time has passed. A fixed-interval schedule stipulates that the first response following a specified interval is reinforced. Looking for a bus is behavior that is reinforced after a fixed time set by the bus schedule. If you just missed a bus, the probability of looking for the next one is quite low. As time passes, the rate of response increases with the highest rate occurring just before the bus arrives. Thus, the rate of response is initially zero but gradually rises to a peak at the moment of reinforcement. This response pattern is called scalloping and is characteristic of fixed-interval reinforcement. In order to eliminate such patterning, a variable-interval schedule may be stipulated. In this case, the first response after a variable amount of time is reinforced. If a person knows by experience that bus arrivals are irregular, looking for the next bus will occur at a moderate and steady rate because the passage of time no longer signals reinforcement (i.e., arrival of the bus).
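The four basic schedules can be contrasted with a small simulation. The Python sketch below (not from the article) assumes an organism that responds steadily once per second and counts how often that responding is reinforced under fixed-ratio, variable-ratio, fixed-interval, and variable-interval rules; the schedule values (FR 20, VR 20, FI 60 seconds, VI 60 seconds) are arbitrary illustrations.

```python
import random

def simulate(schedule, seconds=1000, seed=1):
    """Count reinforcements earned by responding once per second
    under a simple schedule of reinforcement."""
    rng = random.Random(seed)
    responses_since = 0                      # responses since last reinforcement
    last_time = 0                            # time of last reinforcement
    next_interval = rng.expovariate(1 / 60)  # next VI60 interval
    reinforcers = 0
    for t in range(seconds):                 # one response per second
        responses_since += 1
        if schedule == "FR20" and responses_since >= 20:
            # fixed ratio: every 20th response is reinforced
            reinforcers += 1
            responses_since = 0
        elif schedule == "VR20" and rng.random() < 1 / 20:
            # variable ratio: on average every 20th response pays off
            reinforcers += 1
            responses_since = 0
        elif schedule == "FI60" and t - last_time >= 60:
            # fixed interval: first response after 60 s is reinforced
            reinforcers += 1
            last_time = t
        elif schedule == "VI60" and t - last_time >= next_interval:
            # variable interval: first response after ~60 s on average
            reinforcers += 1
            last_time = t
            next_interval = rng.expovariate(1 / 60)
    return reinforcers

for s in ("FR20", "VR20", "FI60", "VI60"):
    print(s, simulate(s))
```

A simulation of this kind only captures the payoff rules; the characteristic response patterns the article describes (post-reinforcement pausing on fixed ratio, scalloping on fixed interval) emerge when the organism, unlike this steady responder, adjusts its rate to the schedule.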
The schedules of reinforcement that regulate human behavior are complex combinations of ratio and interval contingencies. An adjusting schedule is one example of a more complex arrangement between behavior and its consequences (Zeiler 1977). When the ratio (or interval) for reinforcement changes on the basis of performance, the schedule is called adjusting. A math teacher who spends more or less time with a student depending on the student's competence (i.e., number of correct solutions) provides reinforcement on an adjusting-ratio basis. When reinforcement is arranged by other people (i.e., social reinforcement), the level of reinforcement is often tied to the level of behavior (i.e., the greater the strength of response the less the reward from others). This adjustment between behavior and socially arranged consequences may account for the flexibility and variability that characterize adult human behavior.
Human behavior is regulated not only by its consequences. Contingencies of reinforcement also involve the events that precede operant behavior. The preceding event is said to "set the occasion" for behavior and is called a discriminative stimulus or Sd. The ring of a telephone (Sd) may set the occasion for answering it (operant), although the ring does not force one to do so. Similarly, a nudge under the table (Sd) may prompt a new topic of conversation (operant) or cause the person to stop speaking. Discriminative stimuli may be private as well as public events. Thus, a headache may result in taking a pill or calling a physician. A mild headache may be a discriminative stimulus for taking an aspirin, while more severe pain sets the occasion for telephoning a doctor.
Although discriminative stimuli exert a broad range of influences over human behavior, these events do not stand alone. These stimuli regulate behavior because they are an important part of the contingencies of reinforcement. Behaviorism has therefore emphasized a three-term contingency of reinforcement, symbolized as Sd : R → Sr. The notation states that a specific event (Sd) sets the occasion for an operant (R) that produces reinforcement (Sr). The discriminative stimulus regulates behavior only because it signals past consequences. Thus, a sign that states "Eat at Joe's" may set the occasion for your stopping at Joe's restaurant because of the great meals received in the past. If Joe hires a new cook, and the meals deteriorate in quality, then Joe's sign will gradually lose its influence. Similarly, posted highway speeds regulate driving on the basis of past consequences. The driver who has been caught by a radar trap is more likely to observe the speed limit.
CONTEXT OF BEHAVIOR
Contingencies of reinforcement, as complex arrangements of discriminative stimuli, operants, and reinforcements, remain a central focus of behavioral research. Contemporary behaviorists are also concerned with the context of behavior, and how context affects the regulation of behavior by its consequences (Fantino and Logan 1979). Important aspects of context include the biological and cultural history of an organism, its current physiological status, previous environment-behavior interactions, alternative sources of reinforcement, and a history of deprivation (or satiation) for specific events or stimuli. To illustrate, in the laboratory food is typically used as an effective reinforcer for operant behavior. There are obvious times, however, when food will not function as reinforcement. If a person (or animal) has just eaten a large meal or has an upset stomach, food has little effect upon behavior.
There are less obvious interrelations between reinforcement and context. Recent research indicates that depriving an organism of one reinforcer may increase the effectiveness of a different behavioral consequence. As deprivation for food increased, animals worked harder to obtain an opportunity to run on a wheel. Additionally, animals who were satiated on wheel running no longer pressed a lever to obtain food. These results imply that eating and running are biologically interrelated. Based on this biological history, the supply or availability of one of these reinforcers alters the effectiveness of the other (Pierce, Epling, and Boer 1986). It is possible that many reinforcers are biologically interrelated. People commonly believe that sex and aggression go together in some unspecified manner. One possibility is that the availability of sexual reinforcement alters the reinforcing effectiveness of an opportunity to inflict harm on others.
CHOICE AND PREFERENCE
The emphasis on context and reinforcement contingencies has allowed modern behaviorists to explore many aspects of behavior that seem to defy a scientific analysis. Most people believe that choice and preference are basic features of human nature. Our customary way of speaking implies that people make decisions on the basis of their knowledge and dispositions. In contrast, behavioral studies of decision making suggest that we choose an option based on its rate of return compared with alternative sources of reinforcement.
Behaviorists have spent the last thirty years studying choice in the laboratory using concurrent schedules of reinforcement. The word concurrent means "operating at the same time." Thus, concurrent schedules are two (or more) schedules operating at the same time, each schedule providing reinforcement independently. The experimental setting is arranged so that an organism is free to alternate between two or more alternatives. Each alternative provides a schedule of reinforcement for choosing it over the other possibilities. A person may choose between two (or more) response buttons that have different rates of monetary payoff. Although the experimental setting is abstract, concurrent schedules of reinforcement provide an analogue of choice in everyday life.
People are often faced with a variety of alternatives, and each alternative has its associated benefits (and costs). When a person puts money in the bank rather than spending it on a new car, television, or refrigerator, we speak of the individual choosing to save rather than spend. In everyday life, choice often involves repeated selection of one alternative (e.g. putting money in the bank) over the other alternatives considered as a single option (e.g. buying goods and services). Similarly, the criminal chooses to take the property of others rather than take the socially acceptable route of working for a living or accepting social assistance. The arrangement of consequences for crime and legitimate ways of making a living is conceptually the same as concurrent schedules of reinforcement (Hamblin and Crosbie 1977).
Behaviorists are interested in the distribution or allocation of behavior when a person is faced with different rates of reinforcement from two (or more) alternatives. The distribution of behavior is measured as the relative rate of response to, or relative time spent on, a specific option. For example, a student may go to school twelve days and skip eight days each month (not counting weekends). The relative rate of response to school is the proportion of the number of days at school to the total number of days, or 12/20 = 0.60. Expressed as a percentage, the student allocates 60 percent of her behavior to school. In the laboratory, a person may press the left button twelve times and the right button eight times each minute.
The distribution of reinforcement may also be expressed as a percentage. In everyday life, it is difficult to identify and quantify behavioral consequences, but it is easily accomplished in the laboratory. If the reinforcement schedule on the left button produces $30 an hour and the right button yields $20, 60 percent of the reinforcements are on the left. There is a fundamental relationship between relative rate of reinforcement and relative rate of response. This relationship is called the matching law. The law states that the distribution of behavior to two (or more) alternatives matches (equals) the distribution of reinforcement from these alternatives (Herrnstein 1961; de Villiers 1977).
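The arithmetic of matching can be made concrete with a short Python sketch (not from the article) that reproduces the article's two examples: the student's allocation of days to school, and the two-button laboratory payoffs.

```python
def relative_rate(x, y):
    """Proportion of behavior (or reinforcement) allocated
    to the first of two alternatives."""
    return x / (x + y)

# Relative rate of response: 12 days at school vs. 8 days skipped.
behavior = relative_rate(12, 8)        # 12/20 = 0.60

# Relative rate of reinforcement: $30/hour on the left button
# vs. $20/hour on the right button.
reinforcement = relative_rate(30, 20)  # 30/50 = 0.60

# The matching law states that the two proportions are equal.
print(behavior, reinforcement)
```

The law's claim is precisely this equality: the proportion of behavior allocated to an alternative equals the proportion of reinforcement obtained from it.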
Although it is difficult to identify rates of reinforcement for attending school and skipping, the matching law does suggest some practical solutions (Epling and Pierce 1988). For instance, parents and the school may be able to arrange positive consequences when a child goes to school. This means that the rate of reinforcement for going to school has increased, and therefore the relative rate of reinforcement for school has gone up. According to the matching law, a child will now distribute more behavior to the school.
Unfortunately, there is another possibility. A child may receive social reinforcement from friends for skipping, and as the child begins to spend more time at school, friends may increase their rate of reinforcement for cutting classes. Even though the absolute rate of reinforcement for going to school has increased, the relative rate of reinforcement has remained the same or decreased. The overall effect may be no change in school attendance or even further decline. In order to deal with the problem, the matching law implies that interventions must increase reinforcement for attendance and maintain or reduce reinforcement for skipping, possibly by turning up the cost of this behavior (e.g., withdrawal of privileges).
The matching law has been tested with human and nonhuman subjects under controlled conditions. One interesting study assessed human performance in group discussion sessions. Subjects were assigned to groups discussing attitudes toward drug abuse. Each group was composed of three confederates and a subject. Two confederates acted as listeners and reinforced the subject's talk with brief positive words and phrases, provided on the basis of cue lights. Thus, the rate of reinforcement by each listener could be varied depending on the number of signals arranged by the researchers. A third confederate asked questions but did not reinforce talking. Results were analyzed in terms of the relative time subjects spent talking to the two listeners. Speakers matched their distribution of conversation to the distribution of positive comments from the listeners. Apparently, choosing to speak to others is behavior that is regulated by the matching law (Conger and Killeen 1974).
Researchers have found that exact matching does not always hold between relative rate of reinforcement and relative rate of response. A more general theory of behavioral matching has been tested in order to account for the departures from perfect matching. One source of deviation is called response bias. Bias is a systematic preference for an alternative, but the preference is not due to the difference in rate of reinforcement. For example, even though two friends provide similar rates of reinforcement, social characteristics (e.g., status and equity) may affect the distribution of behavior (Sunahara and Pierce 1982). Generalized matching theory is able to address many social factors as sources of bias that affect human choice and preference (Baum 1974; Pierce and Epling 1983; Bradshaw and Szabadi 1988).
A second source of deviation from matching is called sensitivity to differences in reinforcement. Matching implies that an increase of 10 percent (e.g., from 50 to 60 percent) in relative rate of reinforcement for one alternative results in a similar increase in relative rate of response. In many cases, the increase in relative rate of response is less than expected (e.g., only 5 percent). This failure to discriminate changes in relative rate of reinforcement is incorporated within the theory of generalized matching. To illustrate, low sensitivity to changes in rate of reinforcement may occur when an air-traffic controller rapidly switches between two (or more) radar screens. As relative rate of targets increases on one screen, relative rate of detection may be slow to change. Generalized matching theory allows behaviorists to measure the degree of sensitivity and suggests procedures to modify it (e.g., setting a minimal amount of time on a screen before targets can be detected).
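Generalized matching (Baum 1974) expresses both sources of deviation in one equation, usually written in ratio form as B1/B2 = b(R1/R2)^a, where a is sensitivity and b is bias; strict matching is the special case a = 1, b = 1. A minimal Python sketch, with illustrative parameter values:

```python
def response_ratio(r1, r2, sensitivity=1.0, bias=1.0):
    """Generalized matching: B1/B2 = bias * (R1/R2) ** sensitivity.

    sensitivity < 1 models undermatching (behavior shifts less than
    reinforcement); bias != 1 models a systematic preference unrelated
    to reinforcement rates. Values below are illustrative.
    """
    return bias * (r1 / r2) ** sensitivity

# Strict matching: a 60:40 reinforcement ratio yields a 1.5 response ratio.
print(response_ratio(60, 40))

# Undermatching (a = 0.5): the same reinforcement ratio
# shifts behavior less than strict matching predicts.
print(response_ratio(60, 40, sensitivity=0.5))

# Bias (b = 2): one alternative is preferred even at equal reinforcement.
print(response_ratio(50, 50, bias=2.0))
```

Fitting a and b to observed choice data is how behaviorists measure the bias and sensitivity effects described above.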
Matching theory is an important contribution of modern behaviorism. In contrast to theories of rational choice proposed by economists and other social scientists, matching theory implies that humans may not try to maximize utility (or reinforcement). People (and animals) do not search for the strategy that yields the greatest overall returns; they equalize their behavior to the obtained rates of reinforcement from alternatives. Research suggests that matching (rather than maximizing) occurs because humans focus on the immediate effectiveness of their behavior. A person may receive a per-hour average of $10 and $5 respectively from the left and right handles of a slot machine. Although the left side generally pays twice as much, there are local periods when the left option actually pays less than the right. People respond to these changes in local rate of reinforcement by switching to the lean alternative (i.e., the right handle), even though they lose money overall. The general implication is that human impulsiveness ensures that choice is not a rational process of getting the most in the long run but a behavioral process of doing the best at the moment (Herrnstein 1990).
MATHEMATICS AND BEHAVIOR MODIFICATION
The matching law suggests that operant behavior is determined by the rate of reinforcement for one alternative relative to all other sources of reinforcement. Even in situations that involve a single response on a schedule of reinforcement, the behavior of organisms is regulated by alternative sources of reinforcement. A rat that is pressing a lever for food may gain additional reinforcement from exploring the operant chamber, scratching itself, and so on. In a similar fashion, rather than work for teacher attention a pupil may look out the window, talk to a friend, or even daydream. Thus in a single-operant setting, multiple sources of reinforcement are functioning. Herrnstein (1970) argued this point and suggested an equation for the single operant that is now called the quantitative law of effect.
Carr and McDowell (1980) applied Herrnstein's equation to a clinically relevant problem. The case involved the treatment of a 10-year-old boy who repeatedly and severely scratched himself. Before treatment the boy had a large number of open sores on his scalp, face, back, arms, and legs. In addition, the boy's body was covered with scabs, scars, and skin discoloration. In their review of this case, Carr and McDowell demonstrated that the boy's scratching was operant behavior. Careful observation showed that the scratching occurred more often when he and other family members were in the living room watching television. This suggested that a specific situation set the occasion for the self-injurious behavior. Further observation showed that family members repeatedly and reliably reprimanded the boy when he engaged in self-injury. Reprimands are seemingly negative events, but adult attention (whether negative or positive) can serve as reinforcement for children's behavior.
In fact, McDowell (1981) showed that the boy's scratching was in accord with Herrnstein's equation (i.e., the quantitative law of effect). He plotted the reprimands per hour on the x-axis and scratches per hour on the y-axis. When applied to these data, the equation provided an excellent description of the boy's behavior. The quantitative law of effect also suggested how to modify the problem behavior. In order to reduce scratching (or any other problem behavior), one strategy is to increase reinforcement for alternative behavior. As reinforcement is added for alternative behavior, problem behavior must decrease; this is because the reinforcement for problem behavior is decreasing (relative to total reinforcement) as reinforcement is added to other (acceptable) behavior.
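The logic of this intervention strategy follows directly from the hyperbolic form of Herrnstein's equation: reinforcement delivered for alternative behavior enters the denominator, so the rate of the problem behavior must fall. The Python sketch below illustrates the principle; the reprimand rate and the parameters k and Re are hypothetical values chosen for illustration, not the figures from the Carr and McDowell case.

```python
def problem_rate(r_problem, r_alt, k=25.0, r_e=10.0):
    """Herrnstein's equation with an added source of alternative
    reinforcement: B = k * R / (R + R_alt + Re).

    r_problem -- reinforcement rate maintaining the problem behavior
    r_alt     -- reinforcement rate arranged for alternative behavior
    k, r_e    -- illustrative parameter values, not fitted data
    """
    return k * r_problem / (r_problem + r_alt + r_e)

# Hypothetical rate of 30 reprimands per hour maintaining scratching.
# As reinforcement for alternative behavior grows, the predicted
# rate of scratching declines, even though the reprimand rate is unchanged.
for r_alt in (0, 30, 90):
    print(r_alt, round(problem_rate(30, r_alt), 1))
```

The design choice matters clinically: rather than trying to eliminate the reinforcement for problem behavior (which families often cannot do), the intervention dilutes it by enriching reinforcement elsewhere.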
APPLIED BEHAVIOR ANALYSIS
The application of behavior principles to improve performance and solve social problems is called applied behavior analysis (Baer, Wolf, and Risley 1968). Principles of behavior change have been used to improve the performance of university students, increase academic skills in public and high school students, teach self-care to developmentally delayed children, reduce phobic reactions, get people to wear seat belts, prevent industrial accidents, and help individuals stop cocaine abuse, among other things. Behavioral interventions have had an impact on such things as clinical psychology, medicine, education, business, counseling, job effectiveness, sports training, the care and treatment of animals, environmental protection, and so on. Applied behavioral experiments have ranged from investigating the behavior of psychotic individuals to designing contingencies of entire institutions (see Catania and Brigham 1978; Kazdin 1994).
One example of applied behavior analysis in higher education is the method of personalized instruction. Personalized instruction is a self-paced learning system that contrasts with the traditional lecture methods often used to instruct college students. In a university lecture, a professor stands in front of a number of students and talks about his or her area of expertise. Although there are variations on this theme (e.g., students may be encouraged to be active rather than passive learners), the lecture method of teaching is basically the same as it has been for thousands of years.
Dr. Fred Keller (1968) recognized that the lecture method of teaching was inefficient and in many cases a failure. He reasoned that anyone who had acquired the skills needed to attend college was capable of successfully mastering most or all college courses. Some students might take longer than others to reach expertise in a course, but the overwhelming majority of students would be able to do so. If behavior principles were to be taken seriously, there were no bad students, only ineffective teaching methods.
In a seminal article, titled "Good-bye, teacher. . . ," Keller outlined a college teaching method based on principles of operant conditioning (Keller 1968). Keller's personalized system of instruction (PSI) involves arranging the course material in a sequence of graduated steps (units or modules). Each student moves through the course material at his or her own pace, and the modules are set up to ensure that most students have a high rate of success learning the course content. Some students may finish the course in a few weeks; others require a semester or longer.
Course material is broken down into many small units of reading and (if required) laboratory assignments. Students earn points (conditioned reinforcement) for completing unit tests and lab assignments. Mastery of lab assignments and unit tests is required. If test scores are not close to perfect, the test (in different format) must be taken again after a review of the material for that unit. The assignments and tests build on one another so they must be completed in a specified order.
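The PSI contingencies just described, sequential units, a mastery criterion, retakes after review, and points as conditioned reinforcement, can be sketched as a short simulation. The function names, the 90 percent criterion, and the point values are illustrative assumptions, not details from Keller's article:

```python
def run_psi(units, take_test, mastery=0.9):
    """Minimal sketch of PSI contingencies: units are attempted in a
    fixed order, a unit test must meet the mastery criterion before
    the next unit unlocks, and failed tests are retaken (after review)
    in a different form. Points stand in for conditioned reinforcement.
    take_test(unit, attempt) returns a score between 0 and 1."""
    points, attempts_log = 0, []
    for unit in units:
        attempt = 1
        while take_test(unit, attempt) < mastery:
            attempt += 1  # review the material and retake the test
        points += 10      # points earned for mastering the unit
        attempts_log.append((unit, attempt))
    return points, attempts_log

# A hypothetical student who reaches mastery on each second attempt:
points, log = run_psi(["unit 1", "unit 2"],
                      lambda unit, attempt: 0.95 if attempt >= 2 else 0.6)
# points == 20; log == [("unit 1", 2), ("unit 2", 2)]
```

The key design feature the sketch captures is that advancement is contingent on mastery, not on elapsed time, which is what makes the system self-paced.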
Comparison studies have evaluated student performance in PSI courses against the performance of students given computer-based instruction, audio-tutorial methods, traditional lectures, visual-based instruction, and other programmed instruction methods. College students instructed by PSI outperform students taught by these other methods when given a common final examination (see Lloyd and Lloyd 1992 for a review). Despite this positive outcome, logistical problems in organizing PSI courses, such as grading to a mastery level (most students earn an A for the course) and allowing students more than the allotted semester to complete the course, have worked against widespread adoption of PSI in universities and colleges.
Modern behaviorism emphasizes the context of behavior and reinforcement. The biological history of an organism favors or constrains specific environment-behavior interactions, and this interplay of biology and behavior is a central focus of behavioral research. Another aspect of context concerns alternative sources of reinforcement. An individual selects a specific option based on the relative rate of reinforcement; this means that behavior is regulated not only by its consequences but also by the consequences arranged for alternative actions. As we have seen, the matching law and the quantitative law of effect are major areas of basic research that suggest new intervention strategies for behavior modification. Finally, applied behavior analysis, as a technology of behavior change, is having a wide impact on socially important problems of human behavior. A personalized system of instruction is an example of applied behavior analysis in higher education. Research shows that mastery-based learning is more effective than alternative methods of instruction, but colleges and universities have resisted its implementation.
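The claim that individuals allocate behavior according to relative rates of reinforcement is the matching law (Herrnstein 1961). In its strict form it states that the proportion of behavior devoted to an option equals that option's share of the total reinforcement, a relation a few lines of code can illustrate:

```python
def matching_share(r1, r2):
    """Strict matching (Herrnstein 1961): the proportion of behavior
    allocated to option 1 equals that option's share of the total
    reinforcement, B1 / (B1 + B2) = R1 / (R1 + R2)."""
    return r1 / (r1 + r2)

# If option 1 yields 30 reinforcers per hour and option 2 yields 10,
# strict matching predicts 75 percent of responses go to option 1.
assert matching_share(30, 10) == 0.75
```

Deviations from this strict relation, such as bias and undermatching, are treated quantitatively by Baum (1974).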
Baer, D. M., M. M. Wolf, and T. R. Risley 1968 "Some Current Dimensions of Applied Behavior Analysis." Journal of Applied Behavior Analysis 1:91–97.
Bandura, A. 1986 Social Foundations of Thought and Action. Englewood Cliffs, N.J.: Prentice Hall.
Baum, W. M. 1974 "On Two Types of Deviation from the Matching Law: Bias and Undermatching." Journal of the Experimental Analysis of Behavior 22:231–242.
Bem, D. J. 1965 "An Experimental Analysis of Self-Persuasion." Journal of Experimental Social Psychology 1:199–218.
——1972 "Self-Perception Theory." In L. Berkowitz, ed., Advances in Experimental Social Psychology, Vol. 6. New York: Academic Press.
Bradshaw, C. M., and E. Szabadi 1988 "Quantitative Analysis of Human Operant Behavior." In G. Davey and C. Cullen, eds. Human Operant Conditioning and Behavior Modification. New York: Wiley.
Carr, E. G., and J. J. McDowell 1980 "Social Control of Self-Injurious Behavior of Organic Etiology." Behavior Therapy 11:402–409.
Catania, A. C., and T. A. Brigham, eds. 1978 Handbook of Applied Behavior Analysis: Social and Instructional Processes. New York: Irvington Publishers.
Conger, R., and P. Killeen 1974 "Use of Concurrent Operants in Small Group Research." Pacific Sociological Review 17:399–416.
De Villiers, P. A. 1977 "Choice in Concurrent Schedules and a Quantitative Formulation of the Law of Effect." In W. K. Honig and J. E. R. Staddon, eds., Handbook of Operant Behavior. Englewood Cliffs, N.J.: Prentice-Hall.
Emerson, R. M. 1972 "Exchange Theory Part 1: A Psychological Basis for Social Exchange." In J. Berger, M. Zelditch, Jr., and B. Anderson, eds. Sociological Theories in Progress 38–57. Boston: Houghton Mifflin Co.
Epling, W. F., and W. D. Pierce 1988 "Applied Behavior Analysis: New Directions from the Laboratory." In G. Davey and C. Cullen, eds., Human Operant Conditioning and Behavior Modification. New York: Wiley.
Fantino, E., and C.A. Logan 1979 The Experimental Analysis of Behavior: A Biological Perspective. San Francisco: W. H. Freeman.
Herrnstein, R. J. 1961 "Relative and Absolute Response Strength as a Function of Frequency of Reinforcement." Journal of the Experimental Analysis of Behavior 4:267–272.
——1970 "On the Law of Effect." Journal of the Experimental Analysis of Behavior 13:243–266.
——1990 "Rational Choice Theory: Necessary but Not Sufficient." American Psychologist 45:356–367.
Homans, G. C. 1974 Social Behavior: Its Elementary Forms, rev. ed. New York: Harcourt Brace Jovanovich.
Kazdin, A. E. 1994 Behavior Modification in Applied Settings. New York: Brooks/Cole Publishing.
Keller, F. S. 1968 "Good-bye teacher. . ." Journal of Applied Behavior Analysis 1:79–89.
Lloyd, K. E., and M. E. Lloyd 1992 "Behavior Analysis and Technology of Higher Education." In R. P. West and L. A. Hamerlynck, eds., Designs for Excellence in Education: The Legacy of B. F. Skinner, 147–160. Longmont, Colo.: Sopris West, Inc.
McDowell, J. J. 1981 "On the Validity and Utility of Herrnstein's Hyperbola in Applied Behavior Analysis." In C. M. Bradshaw, E. Szabadi and C. F. Lowe, eds., Quantification of Steady-State Operant Behaviour 311–324. Amsterdam: Elsevier/North-Holland.
——1988 "Matching Theory in Natural Human Environments." The Behavior Analyst 11:95–109.
McLaughlin, B. 1971 Learning and Social Behavior. New York: Free Press.
Pierce, W. D., and W. F. Epling 1983 "Choice, Matching, and Human Behavior: A Review of the Literature." The Behavior Analyst 6:57–76.
——1984 "On the Persistence of Cognitive Explanation: Implications for Behavior Analysis." Behaviorism 12:15–27.
——1999 Behavior Analysis and Learning. Upper Saddle River, N.J.: Prentice-Hall.
——, W.F. Epling, and D. Boer 1986 "Deprivation and Satiation: The Interrelations Between Food and Wheel Running." Journal of the Experimental Analysis of Behavior 46:199–210.
Skinner, B.F. 1953 Science and Human Behavior. New York: Free Press.
——1957 Verbal Behavior. New York: Appleton Century Crofts.
——1969 Contingencies of Reinforcement: A Theoretical Analysis. New York: Appleton Century Crofts.
——1974 About Behaviorism. New York: Knopf.
——1987 "Selection by Consequences." In B.F. Skinner, Upon Further Reflection. Englewood Cliffs, N.J.: Prentice-Hall.
Sunahara, D., and W. D. Pierce 1982 "The Matching Law and Bias in a Social Exchange Involving Choice Between Alternatives." Canadian Journal Of Sociology 7:145–165.
Zeiler, M. 1977 "Schedules of Reinforcement: The Controlling Variables." In W.K. Honig and J.E.R. Staddon, eds., Handbook of Operant Behavior. Englewood Cliffs, N.J.: Prentice-Hall.
Zuriff, G.E. 1985 Behaviorism: A Conceptual Reconstruction. New York: Columbia University Press.
W. DAVID PIERCE