The mind as a black box: a history
B.F. Skinner
(1904-1990)
Pioneer of Shaping
Trained pigeons and rats to perform complex actions by rewarding successive approximations toward a final desired behavior, a technique called shaping.
What this shows:
Complex behaviors can be built up by reinforcing intermediate steps; conversely, an unwanted behavior can be extinguished by withdrawing reinforcement.
Edward Thorndike
(1874-1949)
Founder of Educational Psychology
Pioneer of Law of Effect
Thorndike's puzzle box experiments timed how long it took a cat to press a lever to reach food; over repeated trials, the cat learned to reach the food more quickly. In doing so, Thorndike pioneered the study of operant conditioning: behavior learned from the consequences of one's actions.
What this shows:
Behavior that produces positive results is likely to be repeated; behavior that produces negative results will eventually cease.
Robert Gagné
(1916-2002)
Educational Psychologist
Applied specific instructional design strategies to different types of learning outcomes. In 1965, he presented eight conditions of learning, based on the behaviorist stimulus-response model, which led to:
Gagné's Nine Events of Instruction:
A method aimed at thorough content retention and a complete learning experience.
John B. Watson
(1878-1958)
Multi-faceted Behaviorist
Removed any consideration of conscious experience from the analysis of behavior. His experiments on Little Albert (a nine-month-old infant) used Pavlovian conditioning to instill a fear of rats the child did not previously have: Watson struck a steel bar with a hammer in the presence of a rat until the rat alone frightened him.
What this shows:
The folly of behaviorism's disregard of consciousness. In his quest to observe and predict behavior from the outside, Watson ignored its effects on the mind and, in the process, traumatized an infant for the sake of "science."
Ivan Pavlov
& Classical Conditioning
In psychology terms, Pavlov's experiments paired a conditioned stimulus (CS), a previously neutral stimulus, with an unconditioned stimulus (US), one that naturally elicits a response, such as food. After repeated pairings, the once-neutral stimulus alone produced a conditioned response (CR): the salivation that food elicits naturally, as an unconditioned response, now occurred in response to the CS. Pavlov termed this learned response a conditional reflex.
In further experiments, he studied how timing and consistency affect conditioning. Extinction, a reduction in response, occurs when the CS is presented repeatedly without the US. Even after extinction, spontaneous recovery can occur when the conditioned stimulus appears on its own, so extinction is never absolute.
Generalization is the related phenomenon of responding to a stimulus that resembles the original. With further conditioning, generalization gives way to discrimination: the dog learns to differentiate and no longer responds to a stimulus that is similar but not identical.
Taking it a step further, second-order conditioning can occur when an established CS serves in place of a US to condition a new, neutral stimulus. Pavlov, for example, paired a black square with the tone; eventually the black square alone made the dogs salivate.
Implications for teaching & learning
According to Brau, Fox, and Robinson, these experiments illustrate three major behaviorist principles about learning and behavior:
- Behavior is learned from the environment
- Observable behavior is a prerequisite to learning
- All behavior is a product of the stimulus-response formula
Generalization, discrimination, and second-order conditioning can all be applied to teaching: generalization when drawing comparisons between concepts to introduce a new one, discrimination when creating contrasts. Second-order conditioning supplies an additional sensory stimulus to trigger a response, for example, using a bell or lights-out in a K-12 classroom to get students' attention or silence.
Edward Thorndike
& Law of Effect
Unlike classical conditioning, where biological, reflexive responses become associated with new stimuli, operant conditioning involves learned behavior. The cat learns that pressing a lever opens the door to food; it learns from the consequences of its actions. In the same way, the cat can learn new actions.
Based on his experiments, the law of effect states that responses producing pleasant results are more likely to recur, while responses producing unpleasant results are less likely to be repeated.
Implications for teaching & learning
The teacher, who controls the classroom, can similarly encourage repetition of certain behaviors or learning with positive reinforcement, and discourage other behaviors with less pleasant consequences. Even marking a page with an X can discourage repeating the same error.
B.F. Skinner
& Operant Conditioning
Pioneer of Radical Behaviorism
Radical behaviorism includes thoughts and emotions in the analysis of behavior. Through applied behavior analysis, Skinner attempted to analyze internal processes via observable behavior. He believed that "both can be controlled by environmental variables."
(Kimmons et al.)
Published in 1938, Skinner's book The Behavior of Organisms applies operant conditioning to the study of human and animal behavior. It expands on Thorndike's law of effect by categorizing reinforcement (encouraging behavior) and punishment (discouraging behavior) into five scenarios:
- Positive reinforcement: add a positive stimulus --> encourage behavior
- Escape: remove a negative stimulus --> encourage behavior
- Active avoidance: prevent a negative stimulus --> encourage behavior
- Positive punishment: add a negative stimulus --> discourage behavior
- Negative punishment: remove a positive stimulus --> discourage behavior
Skinner further delineated five different schedules to apply reinforcement:
- Continuous reinforcement: after every action
- Fixed interval reinforcement: after a fixed amount of time
- Variable interval reinforcement: after a random amount of time
- Fixed ratio reinforcement: after the sought behavior occurs a set number of times
- Variable ratio reinforcement: after the sought behavior occurs a random number of times
His findings showed that behaviors on interval schedules are easier to extinguish than those on ratio schedules, and fixed schedules easier than variable ones. The most resistant, and therefore most effective, schedule is variable ratio reinforcement.
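The five schedules differ only in the rule for when a reinforcer is delivered. As a minimal illustrative sketch (the function names and parameters here are hypothetical, not from the source), the decision rules might look like:

```python
import random

def continuous(response_count):
    # Continuous: reinforce every response.
    return True

def fixed_ratio(response_count, ratio=5):
    # Fixed ratio: reinforce every `ratio`-th response.
    return response_count % ratio == 0

def variable_ratio(mean_ratio=5):
    # Variable ratio: reinforce each response with probability
    # 1/mean_ratio, so the payoff arrives after a random number
    # of responses (every `mean_ratio`-th on average).
    return random.random() < 1.0 / mean_ratio

def fixed_interval(seconds_since_last_reinforcer, interval=60.0):
    # Fixed interval: reinforce the first response made after
    # `interval` seconds have elapsed.
    return seconds_since_last_reinforcer >= interval

def variable_interval(seconds_since_last_reinforcer, mean_interval=60.0):
    # Variable interval: like fixed interval, but the required
    # wait is redrawn at random around `mean_interval`.
    required = random.uniform(0.5 * mean_interval, 1.5 * mean_interval)
    return seconds_since_last_reinforcer >= required
```

The variable-ratio rule is the slot-machine pattern: because the learner can never tell from a run of unreinforced responses that reinforcement has stopped, it is the hardest schedule to extinguish.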
Implications for teaching & learning design
Skinner himself presented principles of education. He emphasized positive reinforcement as the best path to student learning and advocated for active student engagement rather than passive absorption.
A precursor to Gagné's Nine Events, Skinner developed steps for proper teaching:
- Ensure clear comprehension of the action or performance required.
- Simplify the task by breaking it into smaller steps, then work from simple up to complex.
- Allow the learner to perform each step, providing positive reinforcement when correct.
- Monitor to ensure success until the goal is reached.
- Maintain performance through random reinforcement.
Skinner's steps replicate his shaping experiments, in which he broke complex goals into smaller steps rewarded with positive reinforcement. The learner's conscious acquisition of the required steps reflects his radical behaviorism: the student actively participates in learning and knowledge acquisition.
Teachers who apply operant conditioning can use positive reinforcement, and punishment as a last resort, to modify student behavior. In K-12, this can be disciplinary, as in classroom management, or academic, as in praising a thoughtful answer or rewarding younger children with a sticker. In instructional design, an incorrect answer might produce a red X and a correct one a green circle, reinforcing the correct response.
Robert Gagné
& Nine Events of Instruction
In 1965, Gagné introduced eight conditions of learning based on the stimulus-response model, drawn from behaviorist learning theories. From these, he developed his Nine Events of Instruction:
1. Engage by gaining the learner's attention.
2. Inform learners of the objective.
3. Stimulate recall of pre-existing knowledge.
4. Present structured content in an organized sequence.
5. Guide and support the learner in comprehending and acquiring knowledge and skills through scaffolding.
6. Elicit performance by having the learner practice what they have acquired.
7. Provide feedback on performance to gauge progress.
8. Assess performance to determine the extent to which learning objectives were achieved.
9. Enhance retention through review and by promoting transfer of knowledge to other contexts.
Implications of behaviorism for instructional design:
From Skinner's steps for effective teaching to Gagné's Nine Events of Instruction, behaviorism provides a stimulus-response model to support learning design. Grounded in empirical research, it positions feedback mechanisms and positive reinforcement as key tools for the designer to ensure knowledge acquisition.
Both models of learning can provide frameworks for designing an effective learning experience, as Devlin Peck observes of Gagné's model.
The latter provides a step-by-step guide for novice designers who have yet to gather their own learner feedback. Giving learners immediate feedback on whether an answer is correct or incorrect is one application, similar to Skinner's own teaching machine, an early example of instructional design. A correct answer earns the reward of a green circle, a form of positive reinforcement.
Setting clear objectives with measurable progress also empowers the learner. Chunking information into smaller, digestible pieces makes learning accessible and, paired with positive reinforcement, encourages progress. Online discussion within a clear framework of topics encourages purposeful learner engagement.
Strengths and limitations of behaviorism in Higher Education >