Operant conditioning is based on the idea that we can increase or decrease a certain behaviour by applying a consequence.
For example, if a dog poops on the carpet, we can either provide reinforcement so the dog does it again, or punishment so that it stops.
Both reinforcement and punishment can be either positive or negative, which means we have four possible ways to teach the dog a lesson.
We can draw the four options in a table:
1. If reinforcement is positive, we add something pleasant, like a cookie, to increase the likelihood of a behaviour.
2. If reinforcement is negative, we still want to increase the desired behaviour, this time by removing something unpleasant, like the leash.
3. If punishment is positive, we add something unpleasant to decrease a behaviour.
4. If punishment is negative, we also want to decrease a behaviour, this time by removing something pleasant, like the comfy carpet.
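The four options above form a simple two-by-two classification, which can be sketched as a small lookup. This is purely illustrative: the names and data structure are my own, not drawn from any psychology reference.

```python
# A minimal sketch of the four operant-conditioning quadrants.
# Keys are (what we do with a stimulus, how pleasant that stimulus is);
# values are (name of the procedure, its expected effect on behaviour).
QUADRANTS = {
    ("add", "pleasant"): ("positive reinforcement", "behaviour increases"),
    ("remove", "unpleasant"): ("negative reinforcement", "behaviour increases"),
    ("add", "unpleasant"): ("positive punishment", "behaviour decreases"),
    ("remove", "pleasant"): ("negative punishment", "behaviour decreases"),
}

def classify(action: str, stimulus_valence: str) -> tuple[str, str]:
    """Return (procedure name, expected effect) for a given consequence."""
    return QUADRANTS[(action, stimulus_valence)]

# Example: giving the dog a cookie means adding something pleasant.
print(classify("add", "pleasant"))
```

Note how the two "positive" procedures both add a stimulus and the two "negative" ones both remove one; whether behaviour increases or decreases depends only on whether the procedure is reinforcement or punishment.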
If we stop any sort of manipulation, the conditioned behaviour will eventually disappear again. This is called
extinction.
Operant conditioning was first studied by Edward L. Thorndike and later made famous by the work of B.F. Skinner.
Skinner believed that organisms are doing what they do
naturally until they accidentally encounter a stimulus that
creates conditioning, which results in a change in behaviour.
To test this, he placed a rat inside an operant conditioning
chamber, which later became known as the ‘Skinner Box’.
Among other things inside the box was a lever that released food when pressed.
Conditioning happens in a three-term contingency, known as the ABCs of behaviour:

A = antecedent: the rat accidentally hits the lever, which triggers the release of food.
B = behaviour: the response; the rat keeps pressing the lever.
C = consequence: food keeps coming out.
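The A-B-C cycle can be sketched as a toy simulation in which each reinforced response makes the behaviour more likely. The learning rule here (a fixed probability increment) is an assumption of my own for illustration, not Skinner's actual model:

```python
# A toy sketch of the three-term contingency (antecedent -> behaviour -> consequence).
# Each time the behaviour is followed by a reinforcing consequence (food),
# the probability of repeating the behaviour goes up.
class Rat:
    def __init__(self):
        # Initially the rat only hits the lever by accident.
        self.press_probability = 0.05

    def experience(self, antecedent: str, behaviour: str, consequence: str) -> None:
        # Hypothetical learning rule: reinforcement strengthens the behaviour
        # by a fixed step, capped at certainty.
        if consequence == "food":
            self.press_probability = min(1.0, self.press_probability + 0.2)

rat = Rat()
for _ in range(5):  # repeated A-B-C cycles inside the Skinner box
    rat.experience(antecedent="lever present",
                   behaviour="press lever",
                   consequence="food")

print(rat.press_probability)  # pressing has become near-certain
```

Extinction would be the reverse: if the loop stopped delivering food, a fuller model would decay `press_probability` back toward its accidental baseline.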