Chapter 4: Behaviorism and the Technology of Teaching

From Pai, Young (1973). Teaching, Learning and the Mind. Boston: Houghton Mifflin, pp. 94-117.

 

 

In this chapter we shall concentrate primarily on Burrhus F. Skinner's (1904‑ ) view of teaching and learning, because he, more than any other experimental psychologist, has been explicitly concerned with the systematic application to teaching of recent advances in the experimental analysis of learning. Current interest in programmed instruction, in teaching machines, and in the use of behavioral objectives is the visible effect of Skinner's increasing influence on education. What makes Skinner so significant among, as well as distinct from, the men we have discussed so far is that his main interest lies in the development of a technology rather than a theory of teaching. By the term technology is meant "the means of getting a job done, whatever the means and the job happen to be."(1) So in Skinner's case the "technology of teaching" refers to the processes of finding and arranging conditions for learning as well as using physical science and mechanical and electronic devices to make such arrangements more efficient and effective. Thus, teaching is seen as the expediting of learning(2) rather than as an act or an aggregate of acts which actually transmits something to the learner. This suggests that a technology of teaching must be founded on reliable empirical knowledge of human behavior. As we shall soon see, much of Skinner's thought on teaching is directly related to his scientific account of learning. Therefore, in the following pages we shall examine Skinner's view of learning (operant conditioning) together with its implications for teaching. A brief study of other behavioristic views will follow this section. The philosophical basis of Skinner's descriptive or radical behaviorism will be discussed in Chapter 5.

 

OPERANT CONDITIONING

 

According to Skinner there are two basically different classes of behavior. One is respondent behavior, which is elicited by known, specific stimuli. These responses are reflexive and therefore involuntary: given the stimulus, the response occurs automatically. Bright light and pupillary constriction, a blow on the patellar tendon and the knee jerk, are familiar examples of the connections between stimulus and respondent behavior, or reflexes. Some of our reflexes are present at the time of birth, while others are acquired later through conditioning. Conditioning is the process by which an originally inadequate stimulus becomes capable of producing a response after it has been paired with a stimulus adequate to elicit that specific response. The adequate stimulus and the response to it are called the unconditioned stimulus and the unconditioned response, respectively. The inadequate stimulus is known as a conditioned stimulus, while the response to it is referred to as a conditioned response. For example, Ivan Pavlov (1849-1936), a Russian physiologist, found in a now famous experiment that when food (an unconditioned stimulus) is presented to a dog it leads to salivation (an unconditioned response). He also discovered that when food (the unconditioned stimulus) was repeatedly paired with a tone (a conditioned stimulus), the tone became capable of causing salivation (a conditioned response) without food being present. Skinner calls this Type S conditioning. In Type S conditioning a specific stimulus is presented to induce a response, and therefore the stimulus always precedes the response. It is to this type of conditioning process that John B. Watson (1878-1958), a leading exponent of early behaviorism, attributed the learning of all responses.

 

Unlike Watson, Skinner maintained that operant responses account for most of human behavior. Operant responses are emitted by, but not elicited from, the organism, and since they are not induced by stimuli they are voluntary in nature. The term operant is used to emphasize "the fact that the behavior operates upon the environment to generate consequences."(3) In an experiment a rat's lever-pressing response may be made to occur more often by rewarding the rat with food after a correct response. The response is thus strengthened, or reinforced, by the consequence following it. This, then, is called Type R conditioning, in which reinforcement cannot occur unless a response occurs first; reinforcement is therefore said to be contingent upon responses. The learning of an operant response is called operant or instrumental conditioning, and it differs from Type S or classical conditioning. Skinner points out that

 

in operant conditioning we strengthen an operant in the sense of making a response more probable or, in actual fact, more frequent. In Pavlovian or "respondent" conditioning we simply increase the magnitude of the response elicited by the conditioned stimulus and shorten the time which elapses between stimulus and response.(4)

 

It is, then, the consequence following a response that increases the rate at which an operant response is emitted; operant strength is indicated by this change in the probability of the operant. The following brief account of a Skinner experiment further illustrates the essentials of operant conditioning:

 

A hungry rat [is placed in an] experimental space which contains a food dispenser. A horizontal bar at the end of a lever projects from one wall. Depression of the lever operates a switch. When the switch is connected with the food dispenser any behavior on the part of the rat which depresses the lever is, as we say, “reinforced with food.” The apparatus simply makes the appearance of food contingent upon the occurrence of an arbitrary bit of behavior. Under such circumstances the probability that a response to the lever will occur again is increased.(5)

 

What we have here is Skinner's own formulation of Thorndike's Law of Effect, which asserts that when a response is followed by a satisfying state of affairs the strength of their connection is increased, while the strength of the stimulus-response bond is decreased when the response is followed by an annoying state of affairs.(6) A major difference between Thorndike and Skinner is that the latter is unwilling to use such mentalistic terms as "satisfying" and "annoying" in his explanations, because he holds that a scientific study of human behavior should describe only the observable responses. Therefore, Skinner's version of the Law of Effect simply states that when a response is followed by certain consequences the response tends to appear more frequently. But since not all consequences of a response are reinforcing, should we not attempt to find out why some consequences do and others do not strengthen a response? Here Skinner cautions us against speculating about the "whys" of behavior, because

 

the only way to tell whether or not a given event is reinforcing to a given organism under given conditions is to make a direct test. We observe the frequency of a selected response, then make an event contingent upon it and observe any change in frequency. If there is a change, we classify the event as reinforcing to the organism under existing conditions.(7)

 

In other words, a reinforcer is whatever increases the probability of a response. For example, verbal praise, grades, or gold stars given for reading, or even the teacher's smiles, may be called reinforcers if they make the learner behave in a desired way more frequently. But we do not know why a reinforcer strengthens a response; we only know that some events are reinforcing. Skinner's reluctance to deal with the "why" questions stems from his belief that explanation of an observed fact, i.e., human behavior, should not appeal to "events taking place somewhere else, at some other level of observation, described in different terms, and measured, if at all, in different dimensions."(8) Therefore, our knowledge about learning should be based solely on a descriptive study of the variables under which learning occurs, without relying on the mental or the physiological processes, because neither of them is accessible to direct observation. It is this kind of empirical knowledge which will enable us to actually shape behavior "as a sculptor shapes a lump of clay."(9) And by arranging appropriate contingencies of reinforcement, or the sequence in which responses are followed by reinforcing events, we can maintain the shaped behavior for a long period of time. Similarly, a complex behavior can be shaped by following a carefully designed program of gradually changing contingencies of reinforcement, which will form small units of behavior, thereby successively approximating the desired response.
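
The logic of this direct test lends itself to a small simulation. In the Python sketch below, the baseline response probability, the size of the strengthening increment, and the trial counts are illustrative assumptions, not Skinner's figures; only the three-step procedure (observe frequency, make an event contingent, observe the change) follows the passage above.

```python
import random

def run_trials(p_response, n_trials, contingent_event=False, increment=0.02):
    """Count emissions of a selected response over n_trials time steps.

    If contingent_event is True, every emitted response is followed by
    the event under test, modeled as raising the probability of
    responding (the 'strengthening' effect; the increment is invented).
    """
    emitted = 0
    for _ in range(n_trials):
        if random.random() < p_response:
            emitted += 1
            if contingent_event:
                p_response = min(1.0, p_response + increment)
    return emitted

random.seed(1)

# 1. Observe the frequency of a selected response (baseline).
baseline = run_trials(p_response=0.10, n_trials=300)

# 2. Make an event contingent upon the response and observe any change.
with_event = run_trials(p_response=0.10, n_trials=300, contingent_event=True)

# 3. If the frequency rose, classify the event as reinforcing
#    to this organism under the existing conditions.
print(baseline, with_event)
print("reinforcing" if with_event > baseline else "not shown to be reinforcing")
```

On the fixed seed used here the contingent condition yields far more responses than the baseline, so the event would be classified as reinforcing under these (simulated) conditions, which is all the definition requires.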

 

Now that we have examined the role of reinforcement in operant conditioning generally, we are ready to take a more detailed look at the ways in which various types of reinforcements and punishment affect the teaching‑learning process.

 

REINFORCEMENT

 

Positive and Negative Reinforcements

 

As was pointed out earlier, Skinner found no useful answers to the question "Why is a reinforcer reinforcing?" But we do know that some things are reinforcing when they are present in a situation, while others strengthen an operant response when they are withdrawn. A positive reinforcer is, then, a stimulus which, when added to a situation, increases the probability of a response, while a negative reinforcer is any event which, when withdrawn, produces the same effect. For instance, an increased appearance of the rat's lever-pressing response as a result of presentation of food following the response is a case of positive reinforcement. Withdrawal of electric shocks which results in the increased performance of a pigeon's pecking activity illustrates negative reinforcement. What we must remember about these reinforcers is that, whether positive or negative, they are both defined in terms of their effect, i.e., the strengthening of a response. Hence, we must not confuse negative reinforcement with punishment, which is a basically different process from reinforcement. What is commonly called punishment involves either withdrawal of a positive stimulus, e.g., food, or presentation of a negative stimulus, e.g., an electric shock.
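
Because negative reinforcement is so commonly confused with punishment, the distinctions just drawn can be restated as a small decision table. The sketch below is merely a labeling aid; the function and its vocabulary are mine, not Skinner's, and it assumes the passage's definitions, under which reinforcement, positive or negative, is identified by its strengthening effect.

```python
def classify(stimulus, operation, response_strengthened):
    """Label an operation by the definitions in the text.

    stimulus: "positive" (e.g., food) or "negative" (e.g., shock)
    operation: "presented" or "withdrawn"
    response_strengthened: did the rate of the response increase?
    """
    if response_strengthened:
        if stimulus == "positive" and operation == "presented":
            return "positive reinforcement"   # food follows the lever press
        if stimulus == "negative" and operation == "withdrawn":
            return "negative reinforcement"   # shock removed after pecking
    else:
        if stimulus == "positive" and operation == "withdrawn":
            return "punishment (positive stimulus withdrawn)"
        if stimulus == "negative" and operation == "presented":
            return "punishment (negative stimulus presented)"
    return "unclassified: reinforcement is defined only by its effect"

print(classify("negative", "withdrawn", True))   # negative reinforcement
print(classify("negative", "presented", False))  # punishment, not reinforcement
```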

 

Conditioned Reinforcement

 

When a stimulus, let us say a plate, which originally does not have any reinforcing power is paired with a reinforcing (primary) stimulus such as food, the former frequently acquires the same reinforcing property as the primary stimulus. This process is called conditioned reinforcement and the plate, in this case, is called a conditioned reinforcer.(10) Conditioned reinforcers are often the result of natural contingencies, e.g., food is usually presented on a plate. Now, when conditioned reinforcers are paired with more than one primary reinforcer the conditioned reinforcers are said to be generalized.(11) Money is a good example of a generalized reinforcer, for it enables us to secure food, clothing, shelter, and entertainment. Students behave or study for grades, scholarships, or diplomas, which are not as readily exchanged for other primary reinforcers as is money. But they are indeed exchangeable for high-paying jobs and prestige. The practice of tokenism as seen in our schools today is an excellent example of the use of generalized reinforcers. The term tokenism refers to the practice of giving tokens to children for certain acts and/or achievements and allowing them to cash in the tokens for extra recess periods or other activities of their choice. Attention, affection, approval, and permissiveness are examples of other kinds of generalized reinforcers. The teacher's attention is reinforcing because it is a necessary condition for other reinforcements from him, and the child cannot receive any reinforcement from his teacher unless he can attract the teacher's attention. This suggests that any behavior which attracts the attention of teachers and parents, who are likely to supply other rewards, will be reinforced. Of course, attention alone is not enough, because the teacher tends to reinforce only those acts which he approves. Consequently, responses, such as submissiveness, which lead to such signs of approval as a smile or verbal praise will be strengthened.
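
The mechanics of tokenism described above are simple enough to sketch: tokens are delivered contingent on approved behavior and can later be exchanged for any of several backup reinforcers, which is precisely what makes them generalized. The behaviors, token values, and prices below are invented for illustration.

```python
# Invented behaviors, token values, and backup prices.
TOKENS_FOR = {"completed reading": 2, "helped a classmate": 1}
PRICE_OF = {"extra recess": 5, "free-choice activity": 3}

class TokenEconomy:
    def __init__(self):
        self.tokens = 0

    def reinforce(self, behavior):
        """Deliver tokens contingent on an approved behavior."""
        self.tokens += TOKENS_FOR.get(behavior, 0)

    def exchange(self, backup):
        """Cash tokens in for any one of several backup reinforcers;
        exchangeability with more than one backup is what makes the
        token a generalized reinforcer."""
        if self.tokens >= PRICE_OF[backup]:
            self.tokens -= PRICE_OF[backup]
            return True
        return False

child = TokenEconomy()
for act in ("completed reading", "helped a classmate", "completed reading"):
    child.reinforce(act)
print(child.exchange("extra recess"), child.tokens)  # True 0
```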

 

Generalization of reinforcement is particularly important in teaching because stimulus induction or transfer of learning takes place through this process. Transfer of learning is said to have occurred when "the reinforcement of a response increases the probability of all responses containing the same elements.”(12) As an illustration, if we reinforce a pigeon's response of pecking a yellow round spot one square inch in area, the effect of this reinforcement will spread so that the pigeon will peck a red spot of the same size and shape because of the common properties of size and shape. The pigeon will also respond to a yellow square spot of one square inch in area because of its color and size and to a yellow round spot two square inches in area because of the similar elements of color and shape. What all this means is that in order for transfer of learning to occur the learner must be able to perceive similarities between the original and the new stimulus situations. Though Skinner might not agree with this statement ‑ because an organism's perception of similarity is not an observable event ‑ instances of inappropriate behavior due to misperception of similarities in certain situations are abundant.

 

Schedules of Reinforcement

 

Much of human behavior is shaped through operant conditioning. But the ways in which operant responses are shaped in everyday life are slow and inefficient, mainly because reinforcements of these responses do not occur in either a regular or a uniform manner. Thus if we are to be effective and efficient in shaping and maintaining desired responses, we must construct schedules of reinforcement. Such schedules are especially important in forming a complex behavior, which must be shaped gradually through selective reinforcement of certain responses but not others.

 

The schedule in which reinforcement follows every response is called continuous reinforcement. This schedule is generally used in getting an organism to emit the desired response. But very rarely are we reinforced continuously. We do not win every time we play a game of chess, nor do we catch fish every time we go fishing. "The reinforcements characteristic of industry and education are almost always intermittent because it is not feasible to control behavior by reinforcing every response."(13) Hence, in intermittent reinforcement only some of the responses are followed by reinforcing events. If reinforcement is regular, say at two- or five-minute intervals, it is called interval reinforcement. In this schedule the rate of responding is determined by the frequency of reinforcement. If we reinforce a response every two minutes, the response occurs more frequently than if reinforcements are presented every five minutes. Another kind of intermittent schedule is ratio reinforcement, in which the frequency of reinforcement depends on the rate at which operant responses are emitted. So, if we decide to reinforce every third response it is called reinforcement at a fixed ratio. Students receiving grades upon completion of a paper, a salesman selling on commission, and a workman's piecework pay are all examples of fixed ratio reinforcement. Of course, interval and ratio schedules can be combined so that responses can be strengthened according to the passage of time as well as the number of unreinforced responses emitted. Skinner reports that there are sufficient experimental data to suggest that generally the organism gives back a certain number of responses for each response reinforced, implying that there is a direct relationship between the frequency of response and the frequency of reinforcement.
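
Each of these schedules reduces to a single decision: given the organism's history of responding, is this particular response followed by reinforcement? The sketch below encodes that decision for the three schedules named above. The specific numbers (every third response, a two-minute interval) echo the text, while the framing as small classes is an assumption of the illustration.

```python
class Continuous:
    """Reinforcement follows every response."""
    def reinforced(self, response_count, seconds_since_reinforcement):
        return True  # parameters unused; kept for a uniform interface

class FixedRatio:
    """Reinforcement depends on the number of responses emitted,
    e.g., every third response."""
    def __init__(self, n=3):
        self.n = n
    def reinforced(self, response_count, seconds_since_reinforcement):
        return response_count % self.n == 0

class FixedInterval:
    """The first response after a fixed period, e.g., two minutes, is
    reinforced (the caller resets the timer at each reinforcement)."""
    def __init__(self, seconds=120):
        self.seconds = seconds
    def reinforced(self, response_count, seconds_since_reinforcement):
        return seconds_since_reinforcement >= self.seconds

fr = FixedRatio(3)
print([fr.reinforced(k, 0) for k in range(1, 7)])
# [False, False, True, False, False, True]
```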

 

But what happens to responses if they are not reinforced? The effect of a nonreinforcing situation is called operant extinction. In other words, if a response is not followed by any reinforcement for a period, the response becomes less and less frequent until eventually it ceases completely. Thus, unrewarded acts of children often cease to occur, and though operant extinction takes place much more slowly than operant conditioning, it is still an effective means of removing an unwanted behavior from the organism's repertoire. However, extinction should not be confused with forgetting, because "in forgetting, the effect of conditioning is lost simply as time passes, whereas extinction requires that the response be emitted without reinforcement."(14)
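
The distinction between extinction and forgetting can be made concrete in a few lines: in the sketch below the probability of the operant declines only when the response is actually emitted without reinforcement, never through the mere passage of time. The starting strength and the decay increment are illustrative assumptions.

```python
import random

random.seed(2)
p = 0.8                          # strength of a previously conditioned operant
for _ in range(500):
    if random.random() < p:      # the response is emitted...
        p = max(0.0, p - 0.01)   # ...but never reinforced: extinction
    # If the response is not emitted, p is left unchanged here; decay
    # through the mere passage of time would be forgetting, which the
    # text treats as a different process from extinction.
print(round(p, 2))               # the operant has grown less and less frequent
```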

 

DRIVES AND EMOTIONS

 

In explaining a man's behavior we often attribute his actions to certain drives he is supposed to have. We might say John ate a lot of food to satisfy his hunger drive or that he drank a quart of lemonade to quench his thirst. But Skinner does not regard drives as stimuli which causally affect the rate at which responses are emitted. Only in a metaphorical sense do drives cause our actions. As Ernest R. Hilgard, a contemporary American learning theorist, reiterates, "the word drive is used [by Skinner] only to acknowledge certain classes of operations which affect behavior in ways other than the ways by which reinforcement affects it."(15) Hence, drive is simply a convenient way of referring to the effects of deprivation and satiation. Deprivation, or the hunger drive, can be defined operationally by withholding food from an organism, say a rat, to the point where the rat reaches about 80 percent of its normal body weight, while satiation can be demonstrated by feeding the rat until it no longer takes any food. In terms of their effects, deprivation usually strengthens a response but satiation decreases the rate of a response. These operations can be applied to practical situations, for instance by prohibiting a child from having snacks so he will eat well at the regular mealtime, or by serving large portions of salad and bread before the main course so that a rather skimpy dinner can be served without complaint. Skinner believes that drives should not be treated as special inner states causing overt responses. Similarly, emotions are not inner causes of behavior. As Skinner insists, the terms anger, love, and hate are different ways of talking about a person's predispositions to act in certain ways, because

 

The names of the so‑called emotions serve to classify behavior with respect to various circumstances which affect its probability. The safest practice is to hold to the adjectival form. Just as the hungry organism can be accounted for without too much difficulty . . ., so by describing behavior as fearful, affectionate, timid, and so on, we are not led to look for things called emotions. The common idioms, "in love," "in fear," and "in anger" suggest a definition of an emotion as a conceptual state, in which a special response is a function of circumstances in the history of the individual.(16)

 

In consonance with Skinner's account of drives and emotions, motivation, too, should not be thought of as an inner force propelling an organism to action. It is merely an expression which conveniently covers deprivation and satiation.

 

OPERANT CONDITIONING AND THE TECHNOLOGY OF TEACHING

 

Teaching and the Problem of the First Instance

 

As Skinner himself has put it so explicitly, the application of the principles of operant conditioning to teaching is simple and direct, for teaching is a matter of arranging contingencies of reinforcement under which students learn. They do, of course, learn without being taught, but by providing appropriate learning conditions we can speed up the occurrence of behavior which would have either appeared very slowly or not appeared at all. In this sense, the teacher does not actually pass along some of his own behavior; he builds or helps to construct the behavior of the student, who is induced to engage in forms of behavior appropriate to certain occasions. And since operant conditioning is the process by which man learns all of his voluntary behavior, the technology of teaching becomes a matter of providing and arranging the necessary conditions with the help of mechanical devices, electronic instruments, and schedules of reinforcement so that desired learning can occur efficiently and effectively. Teachers must then help their students to reach an appropriate instructional objective by progressive approximation. This means using reinforcement to form small units of desired terminal behavior. In operant conditioning, reinforcement cannot be presented unless the responses have actually occurred. In other words, we must wait until a desired response appears so that it can be strengthened. But in education many and complex terminal behaviors must be established within a limited period of time. Therefore, it would be tedious and inefficient for teachers to wait for desired responses to appear. In fact, some responses may never take place without some form of deliberate inducement. How to bring about the wanted behavior without simply waiting for it thus becomes "the problem of the first instance."

 

Skinner indicates a number of possible solutions to the problem of the first instance. One is to force a behavior physically, as we often squeeze a child's hand around a pencil and move it to form letters. Unfortunately, the child is not writing in any real sense, and if he does learn to write there are probably other variables at work. Another possibility is to evoke a response by some stimulus. For example, a teacher may raise his hand or wave an object conspicuously to induce his students to pay attention to his storytelling. This technique, too, has a weakness in that the elicited attention is not the attention the students eventually learn. Consequently, these two measures are useful only in a small range of teaching-learning situations. A more effective technique is to prime certain desired responses. Primed behavior can be induced through such procedures as movement duplication, product duplication, and nonduplicative repertoires.

 

Movement Duplication. Skinner seems to be convinced that man has an innate tendency to behave as he has observed others behave. When a person acts as others do, he is naturally reinforced. Hence, the teacher can utilize his students' tendency toward imitative behavior by reinforcing those responses which resemble the responses of a model, often the teacher himself. Such movement-duplicating contingencies are most effectively acquired when the model is conspicuous.(17) The teacher as a model can repeat the desired responses slowly and even with exaggeration. A student's imitative response can be made conspicuous by recording his speech, letting him watch himself in a mirror, or videotaping the responses. Examples of movement duplication can be found in drama, physical education, and dancing courses where students are made to "copy" the teacher's gestures and movements.

 

Product Duplication. Movements cannot be imitated readily if the model's actions cannot be seen. Of course, the effects of a model's movements can be duplicated, and therefore the movements of the learner and the model need not be similar. We can learn to pronounce certain words, deliver a line in a play, or paint a picture by imitating a model, the teacher, without actually seeing how the model himself has performed these acts. What is important here is that the outcome, that is the product, be similar to the model's, but not the movements. Learning to speak a foreign language with records or copying a singer's style by listening to a recording are examples of product duplication. Again, product-duplicating contingencies are made more effective if the model and the product are as clear as possible. For instance, a foreign language student may be allowed to listen to his own pronunciation through earphones and a tape recorder. The modern language laboratory is an excellent example of mechanical devices helping to improve product-duplicating contingencies.

 

Nonduplicative Repertoires. In Skinner's own words, "behavior may also be primed with the help of pre-established repertoires in which neither the responses nor their products resemble controlling stimuli."(18) To put it simply, we can tell the student what to do or how to act and then reinforce him when he acts according to our instruction. What we are doing is giving a verbal instruction to evoke a certain response with the help of behavior patterns which have already been established. Though the evoked response is different from the established response, through the latter's help we can give the student a "picture" of what he must do. This technique is certainly more efficient and convenient than shaping behavior by progressive approximation or by product or movement duplication.

 

Of course, the techniques of priming behavior do not replace other means of shaping behavior. But they do help us with the initial stages of establishing desired behavior, and hence they are useful tools in the early phase of teaching. Skinner reminds us that we should not mistake simple execution of behavior for learning. Teachers often become satisfied merely if their students repeat after them, because the student's imitative behavior is often reinforcing for the teacher. Skinner warns us that

 

students [can] make the same mistake when they study. They take notes during a lecture or when reading a book, they recognize, transcribe, and outline them, they underline words to serve as primes and then read them with special intensity. In so doing they respond to priming stimuli and emit behavior of the proper form. But they are not necessarily bringing that behavior under the control of new variables.(19)

 

In short, learning takes place because behavior is reinforced, but not merely because it has been primed. Learning can be said to have occurred only if the learner can make similar responses on his own.

 

The reinforcers used in most formal learning situations to establish desired behaviors are artificial in the sense that they are deliberately contrived. Therefore, in school, grades and praise are used to reinforce those responses which make up our teaching objectives. Such artificial reinforcers are necessary because natural reinforcers take too long a time to be effective. For example, no student learns to think critically because he can immediately win in a debating contest, nor do children learn to plant seeds because they are promptly reinforced by the resulting harvest. In fact,

 

the human race has been exposed to the real world for hundreds of thousands of years; only slowly has it acquired a repertoire [of responses] which is effective in dealing with that world. Every step in the slow advance must have been the result of fortunate contingencies, accidentally programmed. Education is designed to make such accidents unnecessary.... The natural contingencies used in education must almost always be rigged.(20)

 

Any teacher who relies solely on natural contingencies of reinforcement has given up his role as a teacher, for to expose the student to his environment gives no guarantee that the student's behavior will be followed by any reinforcing event. Therefore, contrived reinforcers are essential in learning, and the work of arranging an effective sequence of such reinforcers should take up much of the activity of teaching.

 

Programmed Instruction and Teaching Machines

 

If by using priming techniques we have been successful in getting our students to execute certain behavior, we have begun the process of shaping terminal behavior, i.e., teaching objectives. We must then arrange a great many contingencies of reinforcement in order that the students can perform the same act on their own and maintain it. Clearly, teaching in the context of schooling involves many extremely complex terminal behaviors. Even at the elementary school level most instructional objectives go far beyond such relatively simple tasks as making letters or coloring pictures. As we go up the educational ladder, objectives become more involved and subtler. Behaviors of such great complexity cannot be learned all at once, but must be formed through programmed instruction. Programmed instruction is a process of successively approximating teaching objectives by making efficient use of reinforcers to establish, maintain, and strengthen desired responses. As Skinner cautions us, in programming it is important that the learner "understand" each step before he moves on to the next. This means the learner stays at one stage until he has mastered what he must learn in order to move on to the next stage. At least in principle, however, programmed instruction is more than a matter of shaping terminal behaviors simply by dividing them into smaller units and reinforcing them one by one. A subject, or a skill such as critical thinking, is more than a mere aggregate of individual responses, for the smaller units are related to each other in such a way that they, as a whole, possess varying degrees of coherence and consistency. Moreover, in programmed instruction each new unit of learned behavior should add to already established behaviors in a cumulative way so that terminal behaviors can be reached successively. Consequently, it is not enough that the smaller units are of proper size; they must also follow an effective sequence. Very rarely can we arrange the various parts of a subject in a line, because they usually form a network or a "tree." In other words, the student has to cover many different segments of a subject matter at the same time. Hence, "the steps in a segment must be arranged in order, and segments must be arranged so that the student is properly prepared for each when he reaches it."(21) Putting a subject in sequence can be done according to the complexity of the materials, or the difficulty of the terminal behavior, or the logical structure of the subject, or a natural order inherent in the subject (e.g., history can be taught as a chronological sequence of events). Unfortunately, none of these approaches to sequence has proven itself consistently useful. As Skinner points out, the most advantageous and effective programming is accomplished when sequence is based on the teacher's knowledge of the student's attainment and direction. Consequently, "arranging effective sequences is [a] good part of the art of teaching."(22)
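
The sequencing problem just described, in which segments form a network or "tree" and the student must be properly prepared for each segment before reaching it, is at bottom a prerequisite structure traversed under a mastery rule. A minimal sketch, with invented units and prerequisites:

```python
# Invented units; each maps to the units that must be mastered first.
PREREQS = {
    "counting": [],
    "addition": ["counting"],
    "subtraction": ["counting"],
    "multiplication": ["addition"],
}

def ready_units(mastered):
    """Units the student is 'properly prepared for': every
    prerequisite mastered, the unit itself not yet mastered."""
    return [u for u, reqs in PREREQS.items()
            if u not in mastered and all(r in mastered for r in reqs)]

mastered = set()
while len(mastered) < len(PREREQS):
    unit = ready_units(mastered)[0]  # arbitrarily take the first ready unit
    # ...present the frames for this unit, reinforcing correct responses,
    # until the student's performance shows the step is mastered...
    mastered.add(unit)
    print("mastered:", unit, "| now ready:", ready_units(mastered))
```

Note that whenever more than one unit is ready, the code simply takes the first; deciding which ready unit to present next is exactly the part the text leaves to the teacher's knowledge of the student's attainment and direction.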

 

As Skinner reports in his Cumulative Record, he has recorded many millions of responses from a single organism during thousands of experimental hours.(23) In Schedules of Reinforcement, published in 1957, Skinner summarized about 70,000 hours of the continuously recorded behavior of individual pigeons, consisting of approximately one quarter of a billion responses. These data were presented in 921 separate charts and tables with almost no interpretive or summarizing comments. The sheer number of responses which make up behavior makes clear that any personal attempt at an effective arrangement of contingencies without some sort of mechanical device is unthinkable. If the control of animal behavior requires an elaborate mechanical arrangement, the contingencies of reinforcement for shaping and maintaining human behavior would certainly necessitate mechanical help, because man is much more sensitive to precise contingencies than lower organisms. The so-called teaching machines are, then, such instruments. They can help the teacher apply the latest advances in the experimental analysis of learning to teaching. Skinner's description of the machine concisely explains the ways in which it functions:

 

The device consists of a box about the size of a small record player. On the top surface is a glazed window through which a question or problem printed on a paper tape may be seen. The child answers the question by moving one or more sliders upon which the digits 0 through 9 are printed. The answer appears in square holes punched in the paper upon which the question is printed. When the answer has been set, the child turns a knob. The operation is as simple as adjusting a television set. If the answer is right, the knob turns freely and can be made to ring a bell or provide some other conditioned reinforcement. If the answer is wrong, the knob will not turn. A counter may be added to tally wrong answers. The knob must then be reversed slightly and a second attempt at a right answer made. (Unlike the flash‑card, the device reports a wrong answer without giving the right answer.) When the answer is right, a further turn of the knob engages a clutch which moves the next problem into place in the window. This movement cannot be completed, however, until the sliders have been returned to zero.(24)
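
The control logic of the device Skinner describes can be sketched directly: the knob "turns" only when the set answer is right, a wrong answer is tallied without the right answer being given away, and the next problem cannot move into place until the sliders have returned to zero. Only that control flow is taken from the passage; the class design and the sample problems are invented.

```python
class TeachingMachine:
    """Control logic only; the arithmetic problems are invented."""
    def __init__(self, problems):
        self.problems = problems       # list of (question, right_answer)
        self.index = 0                 # which problem is in the window
        self.wrong = 0                 # the optional counter of wrong answers
        self.sliders_at_zero = True
        self.answered_correctly = False

    def question(self):
        return self.problems[self.index][0]

    def turn_knob(self, answer):
        """The knob turns freely only for a right answer; a wrong
        answer is tallied but the right answer is never shown."""
        self.sliders_at_zero = False
        self.answered_correctly = (answer == self.problems[self.index][1])
        if not self.answered_correctly:
            self.wrong += 1
        return self.answered_correctly

    def reset_sliders(self):
        self.sliders_at_zero = True

    def advance(self):
        """The next problem moves into place only after a right answer,
        and only once the sliders are back at zero."""
        if self.answered_correctly and self.sliders_at_zero:
            self.index += 1
            self.answered_correctly = False

machine = TeachingMachine([("3 + 4 = ?", 7), ("6 x 2 = ?", 12)])
print(machine.question())              # 3 + 4 = ?
machine.turn_knob(8)                   # wrong: the knob will not turn
if machine.turn_knob(7):               # right: the knob turns freely
    machine.reset_sliders()
    machine.advance()
print(machine.question(), "| wrong answers:", machine.wrong)
```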

 

There are many different versions of the machine with various features to make its operation more automatic and sophisticated, but the basic function they perform in facilitating learning conditions is essentially the same.

 

One of the advantages in utilizing teaching machines as an instructional aid is that right responses can be immediately reinforced. Often manipulation of the machine will be reinforcing enough to keep the child at work. Also, a single teacher can supervise an entire class working with such machines and at the same time help each child to move at his own rate. In this way gifted as well as slow children can learn without being deterred by the fact that one teacher cannot individually supervise children of such diverse capacities and needs. Furthermore, the machine makes it possible to present subject matter so that the solution of one problem depends on the answer to the preceding problem. The student can eventually progress to a complex repertoire of behaviors. There are still other benefits from teaching machines. Like a good tutor the machine carries on a continuous interchange with the learner and induces constant activity to keep the learner alert. The machine also "demands" that a given point be completely understood before moving on to the next, because it presents only those materials for which the student is ready. Like a skillful tutor the machine helps the student to come up with the right answer, and it shapes, maintains, and strengthens correct responses by reinforcing them promptly. Moreover, programs can be presented through the machines when appropriate courses or teachers are not available. And individuals who cannot be in school for various reasons can "teach" themselves with the machine.

 

If teaching machines can indeed function as effectively as teachers, will they eventually replace teachers? As Skinner rightly points out, teaching machines do not teach in any literal sense at all. They are labor-saving devices, and therefore only the mechanized aspects of teaching have been given to machines. This arrangement leaves teachers more time to carry on those relationships with pupils which cannot be duplicated by an instrument. Teaching machines enable teachers to work with more children than they could ever hope to without instrumental aid. Of course, the use of these machines will change some time-honored practices. For example, traditional grades or classes will cease to be significant indicators of the child's academic growth; since the machine's instruction makes sure that every step is mastered, grades or marks will be important only as a means of indicating how far a child has advanced. Most of all, A, B, C, D, and F will no longer serve as motivators in the traditional sense, and the fact that each child is permitted to work at his own rate may lessen if not eliminate the social stigma which usually comes with being a slow learner or an underachiever. It is indeed possible that teaching machines and the techniques of programming can be misused to produce submissive individuals who lack both initiative and creativity. On the other hand, the technology of teaching can help us to maximize the development of those human attributes which can make the greatest possible contribution to mankind. Skinner correctly insists that the harm or the benefit coming from the use of teaching machines and programmed learning is not inherent in the technology of teaching. Man must decide what goals are worthiest of pursuit, both for the individual and for his society. The technology of teaching is only one means of achieving an educational end, and machines and programs should not dictate the direction of our education.

 

PUNISHMENT

 

The use of punishment as a means of controlling human behavior is as old as man. In school we use various punishments to influence children's behavior. Poor academic work is punished with failing grades, while recess periods are often taken away to make children less noisy. Unfortunately, punishment as a technique of controlling human behavior does not always work effectively. Children do not become quiet for any significant length of time by having their recess periods taken away, nor does imprisonment seem to decrease criminal behavior. Certainly, failing grades do not cause children to do better academic work. If what has been said about punishment is true, what role does it play as a variable in shaping and maintaining behavior?

 

In the process of learning, "reinforcement builds up [responses]; [but] punishment is designed to tear them down."(25) Punishment weakens a response in the sense that it decreases the rate of an operant, but it does not permanently reduce the organism's tendency to respond. That is, "the effect of punishment is a temporary suppression of the behavior, not a reduction in the total number of responses."(26) According to Skinner, even after the severest and most prolonged punishment the rate of response rose when punishment was discontinued. In other words, the occurrence of the punished behavior is simply postponed rather than permanently eliminated. Moreover, since suppression of unwanted behavior does not either specify or reinforce desirable behavior, punishment is an ineffective means of correcting a child's misbehavior. In punishing a child, all we are doing is arranging conditions under which acceptable behavior could be strengthened, without clarifying what behaviors are acceptable. As was pointed out earlier, nonreinforcement is a much more effective means of removing unwanted responses permanently.

 

Punishment leads to unfortunate by-products, especially for teachers, because it often becomes the source of conditioned stimuli evoking incompatible behavior from students. That is, anything that becomes associated with the punished act can turn into a conditioned stimulus. Frequently, unwanted emotional reactions such as fear and anxiety result from punishment. Therefore, if a child is punished for eating noodles with his fingers he may stop eating noodles or cease to eat at all. If certain sexual activities before marriage are punished, such acts, though socially approved after marriage, may become associated with such emotional predispositions as guilt, shame, or even a sense of sin. These emotional by-products make it extremely difficult for teachers to establish a productive relationship with their pupils. Understandably, Skinner suggests that we avoid using punishment and find other means of weakening undesirable responses. Briefly, unwanted behavior can be weakened or controlled by modifying the circumstances. Certain behavior of young children may be allowed to pass according to a developmental schedule, and the children allowed to "grow out" of their behavior naturally. Often conditioned responses can be weakened and eliminated by simply letting time pass. Of course, the most effective means of weakening responses is extinction. For example, a child who throws objects to attract the teacher's attention may be allowed to continue his deed without the reinforcement of attention. Another technique is to strengthen incompatible behavior through positive reinforcement. If a child attempts to gain his teacher's attention by leaving his seat and disturbing others, the teacher may pay attention to the child only when he remains in his seat, thereby strengthening the desirable behavior which is incompatible with his earlier undesirable behavior. In short, direct positive reinforcement is preferable to punishment, because this approach seems to have fewer of the objectionable by-products usually associated with punishment.

 

In general, punishment is a poor means of controlling pupil behavior. While punishments often temporarily postpone unwanted behavior, students can and do act to avoid aversive stimulation, i.e., punishment. They may find many different ways of escaping; they may daydream or become inattentive or stay away from school altogether. Another unfortunate result of punishment is that the students may counterattack; they may attack openly or they may simply become rude, defiant, and impertinent. Today physical attacks against the teacher are not an impossibility. If the severity of punishment is increased, counterattacks become more frequent until one party withdraws or dominates the scene. Vandalism and unresponsiveness are other consequences of punishment. Usually the reactions to punishment are accompanied by such emotional responses as fear, anxiety, anger, and resentment, so that the establishment of an educationally productive teacher-pupil relationship becomes almost impossible. Therefore, Skinner recommends that teachers minimize or eliminate the use of punishment as a means of controlling pupil behavior. One way of accomplishing this is to eliminate the conditions which give rise to punishable behavior. For instance, we can separate children who cannot get along with each other, furniture can be made rugged enough so that children cannot damage it, and other means of lighting can be substituted for windows, thereby doing away with the possibility of children breaking them or becoming distracted by the activities they see outside. In other words, we should provide conditions in which punishable behavior is not likely to occur, and at the same time we should construct programs in which children will be able to succeed most of the time. Another possibility is to reinforce those behaviors which are incompatible with the unwanted behavior. As Skinner puts it, "students are kept busy in unobjectionable ways because 'the devil always has something for idle hands to do.' The unwanted behavior is not necessarily strong, but nothing else is at the moment stronger."(27) So, if a student persistently disrupts the class he might be asked to lead a class discussion, or he could be given the responsibility of managing certain segments of the class's activities, and so on.

 

According to Skinner's evaluation, education today is too dominated by aversive stimuli. Children work to avoid or escape from a series of minor punishments, which may come in the form of the teacher's criticism or ridicule, being sent to the principal, suspension, or even "paddling." "In this welter of aversive consequences, getting the right answer is in itself an insignificant event, any effect of which is lost amid the anxieties, the boredom, and the aggressions which are the inevitable by‑products of aversive control.”(28) In addition to this predominantly punitive atmosphere, the contingencies of reinforcement are far too few and whatever contingencies we have arranged are loose and unsystematic. That is, our schools not only lack carefully planned schedules of reinforcement but too much time lapses between reinforcements given to children. Not infrequently days and even weeks pass before assignments and tests are returned to students with grades. We are also without carefully planned programs to help children advance through a series of progressive approximations to the terminal behavior desired. For Skinner, children's failure and incompetence are direct results of these shortcomings which reflect the inefficiency and the ineffectiveness of our schools. Such a sorry state is in part attributable to the teachers’ failure to understand and apply the recent advances in the experimental analysis of learning.

 

 

 

Notes:

 

1. Robert Dreeben, The Nature of Teaching, p. 83.

2. B. F. Skinner, The Technology of Teaching, p. 5.

3. B. F. Skinner, Science and Human Behavior, p. 65.

4. Ibid.

5. Skinner, The Technology of Teaching, p. 62.

6. Edward L. Thorndike, Educational Psychology, Vol. II: The Psychology of Learning, p. 2.

7. Skinner, Science and Human Behavior, pp. 72-73.

8. B. F. Skinner, "Are Theories of Learning Necessary?" The Psychological Review, Vol. 57, No. 4, July 1950, p. 193.

9. Skinner, Science and Human Behavior, p. 91.

10. Skinner, Science and Human Behavior, p. 76.

11. Ibid., p. 77.

12. Ibid., p. 94.

13. Ibid., p. 94.

14. Ibid., p. 71.

15. Ernest R. Hilgard, Theories of Learning, p. 97.

16. Skinner, Science and Human Behavior, pp. 162-163.

17. Skinner, The Technology of Teaching, p. 208.

18. Ibid., p. 210.

19. Ibid., p. 212.

20. Ibid., p. 155.

21. Ibid., p. 221.

22. Ibid., p. 223.

23. B. F. Skinner, Cumulative Record, p. 154.

24. Ibid.

25. Skinner, Science and Human Behavior, p. 182.

26. Ibid.

27. Ibid., p. 190.

28. Skinner, Cumulative Record, p. 150.

 

Instructor’s note: I have applied bold font to some passages to stress their importance. This bold lettering does not appear in the original publication.